Generative Ai As A Tool For Truth Science

Bonisiwe Shabane

Conversation with a trained chatbot can reduce conspiratorial beliefs. (Research output: journal article, peer-reviewed.)

Generative artificial intelligence (AI) has received broad criticism for its role in spreading misinformation (1–5). In its 2024 Global Risks Report, the World Economic Forum ranked AI-amplified misinformation as one of the most severe risks that the world currently faces (6). In this context, evidence for the potential positive impacts of AI is particularly welcome. On page 1183 of this issue, Costello et al. (7) report such evidence. The authors recruited more than 2000 conspiracy believers and showed that a brief but personalized conversation with an AI-driven chatbot could durably reduce research subjects’ misinformed beliefs by 20% on average.

Notably, this effect persisted for at least 2 months after the intervention and was observed across a wide range of conspiracy theories. The results challenge conventional wisdom about conspiratorial beliefs and demonstrate that it is possible to counter even deeply entrenched views with sufficiently compelling evidence. (Science, vol. 385, September 2024, pp. 1164–1165.)

Generative artificial intelligence (AI) has quickly become an important scientific tool, yet its accelerating integration creates both opportunities and challenges.

A workshop on 29 April 2025, organised by the European Parliament’s Panel on the Future of Science and Technology (STOA), will bring together MEPs, Commission representatives, and leading researchers to explore these tensions and...

“Certainly, here is a possible introduction for your topic…” began an article published in an Elsevier scientific journal in 2024. To regular ChatGPT users, this language is very familiar. The article, since retracted for using AI without disclosure, sparked debate about the use – and misuse – of generative AI in science.

Generative AI is a branch of machine learning based on transformer models: a type of neural network architecture that can generate new output based on patterns in large amounts of training data. This includes large language models (LLMs), such as ChatGPT, Claude, and Perplexity AI.
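The core mechanism behind such models can be illustrated without a neural network at all: generation proceeds one token at a time, each token sampled from a distribution conditioned on what came before. The sketch below is a toy autoregressive generator; the hand-written bigram table stands in for the learned transformer weights (it is an invention for illustration, not trained on any data).

```python
import random

# Toy illustration of autoregressive generation: the "model" repeatedly
# predicts the next token from the most recent token. Real LLMs condition
# on the whole context using transformer networks with billions of learned
# parameters; this hand-written bigram table is a stand-in for those weights.
BIGRAMS = {
    "<start>": ["generative", "large"],
    "generative": ["ai"],
    "large": ["language"],
    "language": ["models"],
    "ai": ["generates"],
    "models": ["generate"],
    "generates": ["text"],
    "generate": ["text"],
    "text": ["<end>"],
}

def generate(seed=0, max_tokens=10):
    """Sample tokens one at a time until an end marker appears."""
    rng = random.Random(seed)
    tokens = ["<start>"]
    for _ in range(max_tokens):
        candidates = BIGRAMS.get(tokens[-1], ["<end>"])
        nxt = rng.choice(candidates)
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens[1:])

print(generate())
```

Because the first step has two candidate continuations, repeated runs with different seeds produce different sentences from the same "model" — the same property that makes LLM output fluent but non-deterministic.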

Scientists are increasingly using LLMs to help with everything from summarising and brainstorming to editing, writing, and even reviewing articles. At least 10% of abstracts published in 2024 on PubMed were written with LLMs, researchers estimate, and ChatGPT has even been listed as a co-author of several scientific papers. The European Commission is currently writing a European AI in Science Strategy, aimed at accelerating the use of AI in science. Few contest that AI can aid scientific discovery: a point on stark display when Google DeepMind scientists won the Nobel Prize in Chemistry for developing AlphaFold2 – an AI program which solved the 50-year-old problem...
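Estimates like the 10% figure rest on "excess vocabulary" analyses: certain words that LLMs favour appear far more often in abstracts after ChatGPT's release than before. A toy sketch of that general idea follows; the marker list and the two mini-corpora below are invented for illustration, whereas the published estimates are derived from the full PubMed record.

```python
from collections import Counter

# Sketch of the "excess vocabulary" idea behind estimates of LLM-assisted
# writing: count how often words that LLMs overuse ("delve", "underscore",
# ...) appear in a corpus, then compare corpora from before and after 2023.
# The marker set and the sample abstracts are invented for illustration.
MARKERS = {"delve", "underscore", "showcase", "pivotal"}

def marker_rate(abstracts):
    """Fraction of all words in a corpus that belong to the marker set."""
    counts = Counter(
        word.lower().strip(".,") for text in abstracts for word in text.split()
    )
    total = sum(counts.values())
    return sum(counts[m] for m in MARKERS) / total if total else 0.0

pre_2023 = ["We measured enzyme activity in vitro."]
post_2023 = ["We delve into pivotal mechanisms that underscore enzyme activity."]
print(marker_rate(pre_2023), marker_rate(post_2023))
```

A jump in the marker rate between the two corpora is the signal such studies quantify; turning it into a "fraction of abstracts written with LLMs" requires statistical modelling well beyond this sketch.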
