Technological and Policy Approaches to Combatting Disinformation Online
Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 1507). Social trust, democratic processes, and public discourse are seriously challenged by the proliferation of disinformation online. This paper explores a dual strategy, combining technological and policy interventions, to curb the spread of false information online. Technological approaches include AI, machine learning, and blockchain, which can be used to detect, mitigate, and prevent the spread of false information; their limitations, however, are also analysed, particularly in terms of bias and scalability.
The paper further focuses on regulatory frameworks, platform accountability, and educational initiatives that build digital literacy and ethical online behaviour. The interplay between these policy approaches underscores the need for all sides to combine efforts against this global crisis. Analysing case studies in light of existing measures can inform stakeholders, from governments to tech companies and civil society, about how best to respond. The findings highlight the importance of innovative, ethical, and globally coordinated solutions for safeguarding information integrity in the digital age. By examining existing frameworks for what has worked or failed, identifying opportunities for change, and offering recommendations to governments, tech companies, and civil society, the paper remains accessible and actionable from an interdisciplinary angle. In so doing, it calls for coordinated, ethical, and adaptive global strategies to protect information integrity in an increasingly interconnected world.
The internet and social media have brought about a sea change in how information is shared and consumed, opening unprecedented avenues of communication and connectivity around the globe. However, this digital transformation has also provided an easy pathway for the swift dissemination of false and misleading information, commonly referred to as disinformation. In contrast to misinformation, which is unintentional, disinformation is deliberately created and disseminated to deceive, influence opinions, or achieve political, economic, or social advantages. The consequences of uncontrolled disinformation are far-reaching, from eroding public trust in government and influencing elections to worsening health crises and deepening social cleavages [1]. This demands that concurrent efforts in policy and technology be brought together urgently. Technical tools such as AI, ML, and blockchain present new avenues for tracing and mitigating false information.
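To make the ML approach concrete, the core idea behind many automated disinformation detectors is supervised text classification: a model is trained on labelled examples and scores new text by how closely it resembles each class. The following is a minimal sketch of that idea using a tiny multinomial Naive Bayes classifier written from scratch; the training headlines and labels are hypothetical, chosen purely for illustration, and a real system would use far larger corpora and richer features.

```python
import math
from collections import Counter

def train(docs):
    """Count word frequencies per class and class frequencies
    from (text, label) training pairs."""
    word_counts = {}          # label -> Counter of word frequencies
    class_counts = Counter()  # label -> number of documents
    for text, label in docs:
        class_counts[label] += 1
        word_counts.setdefault(label, Counter()).update(text.lower().split())
    return word_counts, class_counts

def classify(text, word_counts, class_counts):
    """Return the label with the highest log-posterior,
    using add-one (Laplace) smoothing for unseen words."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total_docs = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        score = math.log(class_counts[label] / total_docs)  # log prior
        total_words = sum(word_counts[label].values())
        for w in text.lower().split():
            score += math.log((word_counts[label][w] + 1)
                              / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical toy training data, for illustration only
training = [
    ("miracle cure doctors hate this secret trick", "disinfo"),
    ("shocking secret they refuse to tell you", "disinfo"),
    ("study published in peer reviewed journal", "credible"),
    ("official statistics released by health agency", "credible"),
]
wc, cc = train(training)
print(classify("secret miracle trick they hate", wc, cc))  # prints: disinfo
```

Such classifiers illustrate the bias and scalability limits the paper raises: the model can only echo the patterns, and prejudices, of its training labels.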
By contrast, policies that uphold transparency, platform responsibility, and public education should provide an appropriate regulatory model for suitable and ethical measures [2].

Edited by: Ludmilla Huntsman, Cognitive Security Alliance, United States. Reviewed by: J. D. Opdyke, DataMineit, LLC, United States; Hugh Lawson-Tancred, Birkbeck, University of London, United Kingdom.
*Correspondence: Alexander Romanishyn, a.romanishyn@ise-group.org. Received 2025 Jan 31; Accepted 2025 Jun 30; Collection date 2025.

A high-level, evidence-informed guide to some of the major proposals for how democratic governments, platforms, and others can counter disinformation. The Technology and International Affairs Program develops insights to address the governance challenges and large-scale risks of new technologies. Its experts identify actionable best practices and incentives for industry and government leaders on artificial intelligence, cyber threats, cloud security, countering influence operations, reducing the risks of biotechnologies, and ensuring global digital inclusion. The goal of the Partnership for Countering Influence Operations (PCIO) is to foster evidence-based policymaking to counter threats in the information environment.
Key roadblocks identified in this work include the lack of: transparency reporting to inform what data is available for research purposes; and rules guiding how data can be shared with researchers and for what...

Carnegie's Information Environment Project is a multistakeholder effort to help policymakers understand the information environment, think through the impact of efforts to govern it, and identify promising interventions to foster democracy. Disinformation is widely seen as a pressing challenge for democracies worldwide. Many policymakers are grasping for quick, effective ways to dissuade people from adopting and spreading false beliefs that degrade democratic discourse and can inspire violent or dangerous actions. Yet disinformation has proven difficult to define, understand, and measure, let alone address. The framework to combat online misinformation provides a strategic roadmap to address the growing challenge of false and misleading information in an interconnected world.
Misinformation threatens public health, democracy, social harmony, and economic stability, requiring proactive and coordinated efforts. The framework integrates global imperatives with localized solutions to build a resilient information ecosystem that fosters trust and informed decision-making. While achieving zero misinformation is the ultimate goal, challenges such as evolving tactics, digital literacy gaps, and content regulation complexities necessitate continuous adaptation and collaboration. At its core, the framework is built on seven pillars: (1) clear definitions and scope, (2) cultural context and sensitivity, (3) legal framework and ethical balance, (4) education and empowerment, (5) technological innovation, (6)... These pillars reflect the dynamic interplay between global standards and local adaptations, providing a structured approach to counter misinformation effectively. Each pillar is underpinned by carefully defined dimensions that offer actionable guidance tailored to address specific challenges and leverage unique opportunities.
The framework’s dimensions delve deeper into the pillars, addressing critical components such as digital literacy, technological advancements, legal safeguards, and cultural adaptations to counter misinformation effectively. To operationalize the framework, a set of concrete actions is proposed, including fact-checking tools, education initiatives, public-private partnerships, and rapid response mechanisms. By leveraging diverse perspectives and measurable benchmarks, the framework equips stakeholders with an adaptive toolkit to combat misinformation and strengthen the integrity of the digital information landscape. AI makes it easier to create false or decontextualized content and to spread it quickly through existing channels.
In an information ecosystem where misinformation circulates faster than fact-checkers can respond, increasingly precise and efficient tools are needed to verify content, detect hoaxes, and understand how false narratives spread. The following list brings together five tools that media outlets and fact-checking organizations use for tasks ranging from tracking disinformation and analyzing its dissemination patterns to recovering deleted content and analyzing audiovisual material. Fact Check Explorer allows users to enter a phrase, piece of data, or link to check whether someone has already verified it. Google has developed an ecosystem of fact-checking tools, some aimed at fact-checkers specifically and others at the general public. The flagship tool is Fact Check Explorer, a specialized search engine that compiles claim reviews from multiple fact-checking organizations worldwide, including Chequeado (Argentina), Bolivia Verifica (Bolivia), El Sabueso (Mexico) and Cotejo.info (Venezuela).
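The same claim corpus behind Fact Check Explorer is also exposed programmatically through Google's Fact Check Tools API, whose `claims:search` endpoint returns published fact-checks matching a query. Below is a minimal sketch of how such a lookup could be wired up with the Python standard library; it assumes you have obtained an API key, and the exact response fields used (`claims`, `claimReview`, `textualRating`) reflect the API's documented JSON shape but should be verified against the current reference before relying on them.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API_BASE = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def build_claim_search_url(query, api_key, language="en"):
    """Build a claims:search request URL for the Fact Check Tools API."""
    params = urlencode({"query": query, "languageCode": language, "key": api_key})
    return f"{API_BASE}?{params}"

def search_claims(query, api_key):
    """Fetch fact-check reviews matching the query.
    Requires network access and a valid API key."""
    with urlopen(build_claim_search_url(query, api_key)) as resp:
        data = json.load(resp)
    # Each claim carries the text that was checked and each
    # publisher's verdict (e.g. "False", "Misleading").
    return [
        (claim.get("text", ""), review.get("textualRating", ""))
        for claim in data.get("claims", [])
        for review in claim.get("claimReview", [])
    ]

# "YOUR_API_KEY" is a placeholder; no request is sent here.
print(build_claim_search_url("vaccines cause autism", "YOUR_API_KEY"))
```

A newsroom script could run such queries against trending claims before assigning scarce fact-checking effort to material that has already been verified elsewhere.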
The Global Fight Against Information Disorder: Navigating the Murky Waters of Misinformation, Disinformation, and Malinformation

The digital age has ushered in an unprecedented era of information accessibility, but this accessibility has a dark side: the proliferation of information disorder. Misinformation (false information shared unintentionally), disinformation (deliberately false information spread to deceive), and malinformation (true information shared maliciously to cause harm) are eroding public trust and threatening the foundations of democratic societies. Governments worldwide are grappling with the complex challenge of combating this infodemic while safeguarding fundamental freedoms.

Rebuilding Trust: The Cornerstone of an Informed Society

The first line of defense against information disorder is rebuilding public trust.
Erosion of trust in institutions makes citizens more susceptible to misinformation and less likely to accept factual information, even from credible sources. Governments must prioritize transparency, engaging in open communication with the public, acknowledging uncertainties, and readily sharing data. This approach, exemplified by New Zealand's successful COVID-19 response, fosters trust and cooperation, crucial elements in navigating complex challenges. As Francis Fukuyama highlighted, societies with high levels of trust are better equipped to manage crises, while mistrust fuels instability.

Empowering Citizens: The Importance of Media Literacy