Fake News Detector: Real-Time Fake News Detection Using Neural Networks

Bonisiwe Shabane

Scientific Reports, volume 15, Article number: 41522 (2025)

In recent years, the widespread dissemination of fake news on social media has raised concerns about its impact on public opinion, trust, and decision-making. Addressing the limitations of traditional detection methods, this study introduces a hybrid deep learning approach that enhances the identification of fake news. The objective is to improve detection accuracy and model robustness by combining a Long Short-Term Memory (LSTM) network for contextual feature extraction with a Convolutional Gaussian Perceptron Neural Network (CGPNN) for classification.

To further optimize performance, we integrated a metaheuristic Moth-Flame Whale Optimization (MFWO) algorithm for hyperparameter tuning. Experimental evaluation was conducted on four benchmark datasets (ISOT, Fakeddit, BuzzFeedNews, and FakeNewsNet) using standardized preprocessing techniques and TF-IDF-based text representation. Results show that the proposed model outperforms existing methods, achieving up to 98% accuracy, 95% F1-score, and statistically significant improvements (p < 0.05) over transformer-based and graph neural network models. These findings suggest that the hybrid framework effectively captures linguistic patterns and textual irregularities in deceptive content. The proposed method offers a scalable and efficient solution for fake news detection, with practical applications in social media monitoring, digital journalism, and public awareness campaigns. Overall, the framework delivers 3–8% higher accuracy and F1-score than state-of-the-art approaches, demonstrating both robustness and practical applicability for large-scale fake news detection.
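The TF-IDF text representation mentioned above can be illustrated with a minimal, library-free sketch. The scoring below follows the standard tf × idf formulation (raw term frequency times log inverse document frequency); it is an illustration of the general idea, not the exact weighting scheme used in the study:

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute a TF-IDF weight for every term in every tokenized document.

    tf  = raw count of the term in the document
    idf = log(N / df), where df is the number of documents containing the term
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    return [
        {term: count * math.log(n / df[term]) for term, count in Counter(doc).items()}
        for doc in docs
    ]

docs = [
    "shocking secret the media hides".split(),
    "officials confirm the new policy".split(),
]
weights = tfidf(docs)
# "the" appears in both documents, so idf = log(2/2) = 0 and its weight
# vanishes, while document-specific words like "shocking" keep positive weight.
```

In practice a vectorizer from an established library would replace this sketch, but the effect is the same: terms common to all documents are down-weighted, and terms distinctive to a document are emphasized, which is what lets the downstream classifier pick up on textual irregularities.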

Fake news existed long before the advent of digital technology, with the deliberate dissemination of false information dating back to ancient times. However, the proliferation of internet technologies and computational advancements has dramatically transformed the landscape of information sharing. Contemporary digital platforms, particularly social media networks, have created unprecedented opportunities for content generation and dissemination with minimal barriers to entry1. The information revolution has democratized access to information, yet it has also enabled the rapid dissemination of both genuine and false content. As social media displaces traditional media channels as a primary information source, misleading content spreads ever faster. False information reaches far beyond its original targets to affect society at large, damaging public trust in authentic news sources and provoking false public reactions to factual reporting.

Research showed that fabricated content spread more widely on Facebook and Twitter than accurate reporting during the 2016 U.S. presidential election2. The September 2024 Springfield pet-eating hoax spread false claims about Haitian immigrants eating domestic pets after political figures shared it, despite originating from an unsubstantiated social media post3. The financial incentives behind fake news proliferation should not be ignored. Research shows that major technology platforms gain indirect advantages when users engage with provocative false content, and websites that produce fabricated news generate significant revenue through online advertising systems, creating financial incentives for spreading misinformation4.

The stakes became clear in July 2024, when false information about a Southport tragedy led to civil disturbances throughout the United Kingdom5. Unsubstantiated claims about government funds being misappropriated to media outlets, later disproved by independent fact-checkers, demonstrate how misinformation affects public discourse at its highest levels6. Specialized fact-checking websites and platform-integrated tools, such as the "community notes" system implemented on X (formerly Twitter), have emerged to fight this trend. The International Federation of Library Associations and Institutions has created frameworks to help users detect unreliable content, and Bozkurt et al.8 have conducted systematic assessments of current detection and prevention methods. These initiatives recognize that political strategies frequently build upon misinformation, which in turn affects financial markets, investment choices, and crisis management. The intentional crafting of fake news to look authentic creates major obstacles for detection systems9.

Social media platforms enable users to share content with their connected networks, amplifying its potential impact10. A 2016 survey showed that 62% of American adults obtained news from social media platforms, up from 49% in 2012, and 47% used social media as their main news source, thus making fake news... The situation demands the immediate implementation of effective protective measures against misinformation. Building complete detection systems faces multiple technical obstacles related to reference datasets, event coverage, consumption patterns, verification processes, and content divergence11. The research community has developed multiple solutions to tackle these issues, yet fake news detection accuracy remains a persistent challenge, driving ongoing investigation into better detection methods. The rapid growth of false information online requires the immediate development of automated systems that can analyze large volumes of content.

Deep learning techniques show exceptional potential for evaluating social network content12. Traditional fake news detection methods have mainly depended on content analysis of news articles' intrinsic features, while social context models that study information diffusion patterns have been adopted more recently13. The enormous amount of content and its fast spread across platforms make manual assessment impossible, so automated systems must be developed to quickly assess information reliability. Automated models have focused on either news content or social context features, using data mining algorithms to extract fake news characteristics grounded in established social and psychological theories. From a data mining perspective, a general classification model for fake news identification consists of two stages: feature extraction and model construction.
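The two-stage view (feature extraction, then model construction) can be sketched in miniature. The sensational-cue word list and the nearest-centroid rule below are purely illustrative stand-ins, not the features or classifier used in the work surveyed here:

```python
# Stage 1 - feature extraction: map each article to a small numeric vector.
# The cue list is a hypothetical stand-in for real linguistic features.
CUES = ("shocking", "secret", "miracle", "exposed")

def extract_features(text):
    words = text.lower().split()
    return (sum(w in CUES for w in words), len(words))

# Stage 2 - model construction: learn one centroid per class from labeled
# examples, then classify new articles by the nearest centroid.
def build_model(labeled):
    centroids = {}
    for label in {lab for _, lab in labeled}:
        vecs = [extract_features(t) for t, lab in labeled if lab == label]
        centroids[label] = tuple(sum(c) / len(vecs) for c in zip(*vecs))
    return centroids

def classify(model, text):
    x = extract_features(text)
    return min(model, key=lambda lab: sum((a - b) ** 2 for a, b in zip(x, model[lab])))

train = [
    ("shocking secret exposed by insider", "fake"),
    ("miracle cure doctors hate exposed", "fake"),
    ("council approves annual budget report", "real"),
    ("court publishes ruling on appeal", "real"),
]
model = build_model(train)
```

Real systems replace both stages with far richer components (embeddings or TF-IDF for stage one, deep networks for stage two), but the division of labor is the same.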

The system extracts relevant content characteristics during feature extraction and then uses these representations to differentiate between authentic and fabricated news in the model construction phase14,15,47,48,49. Rapid advances in communication and internet technology have fueled the growth of virtual communities: social media platforms make communication and information transfer easy and fast, which has made them popular and widely used. This extensive use, however, poses major challenges for social networks, as it has drawn many people into cybercrime and given rise to malicious accounts online.

Online social networking platforms such as Facebook and Twitter allow all users to freely generate and consume massive volumes of material, regardless of their traits. While individuals and businesses utilize this information to gain a competitive edge, spam or phony users pollute it with deceptive data. According to estimates, 1 in 200 social media posts and 1 in 21 tweets contain spam. The core problem is detecting false news accurately and correcting it, or preventing its dissemination before it spreads through the network. A new method is given based on improving the false news detection system; the improvement was most significant in the preprocessing stage, where GloVe is used, an unsupervised learning algorithm developed... The basic idea behind the GloVe word embedding is to derive the relationships between words from co-occurrence statistics.
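The practical payoff of GloVe-style embeddings is that related words end up close together in vector space. The tiny 3-dimensional vectors below are purely illustrative stand-ins; real pre-trained GloVe vectors have 50 to 300 dimensions and are learned from global word-word co-occurrence statistics:

```python
import math

# Toy vectors standing in for pre-trained GloVe embeddings (hypothetical values).
embeddings = {
    "hoax":     (0.9, 0.1, 0.0),
    "fake":     (0.8, 0.2, 0.1),
    "verified": (0.1, 0.9, 0.3),
}

def cosine(u, v):
    """Cosine similarity: dot product divided by the product of vector norms."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Related words have higher cosine similarity than unrelated words,
# which is what lets a downstream model generalize across paraphrases.
sim_related = cosine(embeddings["hoax"], embeddings["fake"])
sim_unrelated = cosine(embeddings["hoax"], embeddings["verified"])
```

In a full pipeline, each word in a news article would be replaced by its embedding vector before being fed to the sequential model, so the classifier sees semantic similarity rather than raw token identity.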

The proposed method contains the deep learning algorithms convolutional neural network (CNN), deep neural network (DNN), and long short-term memory (LSTM). The RNN with GloVe in the preprocessing stage, evaluated on the fake news corpus dataset, achieves the highest accuracy of 98.974% owing to its sequential processing and classification. Data is available from the authors upon reasonable request.

Some marketing campaigns also aim to reach the audience by creating a fake story (a piece of fake news) for the company; for example, the British gambling firm Paddy Power faked images of the... Most of the literature about fake news focuses on automatic detection; we provide a short review of this literature in the web Appendix, and some recent contributions even consider real-time detection (Zhang, Gupta, Qin,...). This is useful for journalism researchers because it helps focus verification efforts on pieces of news appearing in digital platforms, blogs, and other communication vehicles on the Internet.

Roumeliotis, K. I., Tselikas, N. D., & Nasiopoulos, D. K. (2025). Fake News Detection and Classification: A Comparative Study of Convolutional Neural Networks, Large Language Models, and Natural Language Processing Models. Future Internet, 17(1), 28. https://doi.org/10.3390/fi17010028

Scientific Reports, volume 15, Article number: 20544 (2025)

To improve the accuracy and efficiency of fake news detection, this study proposes a deep learning model that integrates residual networks with attention mechanisms. Building on traditional convolutional neural networks, the model incorporates multi-head attention mechanisms to enhance the extraction of key features from multimodal data such as text, images, and videos. Additionally, residual connections are introduced to deepen the network architecture, mitigate the vanishing gradient problem, and improve the model’s learning depth and stability. Compared with existing approaches, this study introduces several key innovations. First, it constructs a multimodal feature fusion module that integrates text, image, and video data. Second, it designs a cross-modal alignment mechanism to better connect information across different data types.

Third, it optimizes the feature fusion structure for more effective integration. Finally, the study employs attention mechanisms to highlight and enhance the representation of salient features. Experiments were conducted using three representative datasets: the LIAR dataset for political short texts, the FakeNewsNet dataset for English multimodal news, and the Weibo dataset from a Chinese social media platform. These were selected to comprehensively evaluate the model’s performance across different scenarios. Baseline models used for comparison include Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized Bidirectional Encoder Representations from Transformers Approach (RoBERTa), Generalized Autoregressive Pretraining for Language Understanding (XLNet), Enhanced Representation through Knowledge Integration (ERNIE),... In terms of four key performance metrics—accuracy, precision, recall, and F1 score—the proposed model achieved best-case values of 0.977, 0.986, 0.969, and 0.924, respectively, outperforming the aforementioned baseline models overall.
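The residual-plus-attention idea described above can be sketched without any deep learning framework. This is a minimal single-head scaled dot-product self-attention with a residual (skip) connection added on top; it is a conceptual illustration, not the paper's multi-head, multimodal implementation:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(seq):
    """Single-head self-attention: each position re-weights all positions by
    the softmax of scaled dot-product similarity (queries = keys = values)."""
    d = len(seq[0])
    out = []
    for q in seq:
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in seq]
        weights = softmax(scores)
        out.append([sum(w * v[i] for w, v in zip(weights, seq)) for i in range(d)])
    return out

def residual_attention(seq):
    """Residual connection: add the attention output back onto the input,
    which is what keeps gradients flowing as layers are stacked deeper."""
    att = attention(seq)
    return [[x + a for x, a in zip(row, arow)] for row, arow in zip(seq, att)]

# Three toy token embeddings of dimension 2.
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = residual_attention(tokens)
```

A multi-head version runs several such attention maps in parallel over learned projections and concatenates the results; the residual add is identical in structure.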
