Looking Beyond the Impressions of Algorithms and Fact-Checking in Fighting Online Misinformation

Bonisiwe Shabane



Abdurrahman Bello Onifade. Looking beyond the impressions of algorithms and fact-checking in fighting online misinformation: A literature review. Education for Information, 39(1):33-49, 2023. [doi]

Facebook's logo on a smartphone screen in Moscow (Kirill Kudryavtsev/AFP via Getty Images). Is Facebook exacerbating America's political divide? Are viral posts, algorithmically ranked feeds, and partisan echo chambers driving us apart? Do conservatives and liberals exist in ideological bubbles online? New research published Thursday attempts to shed light on these questions.

Four peer-reviewed studies, appearing in the journals Science and Nature, are the first results of a long-awaited, repeatedly delayed collaboration between Facebook and Instagram parent Meta and 17 outside researchers. They investigated social media's role in the 2020 election by examining Facebook and Instagram before, during, and after Election Day. While the researchers were able to tap large swaths of Facebook's tightly held user data, they had little direct insight into the inner workings of its algorithms. The design of the social media giant's algorithms, a complex set of systems that determine whether you're shown your friend's vacation snapshots or a reshared political meme, has come under increasing scrutiny... Those fears crystallized in the aftermath of the 2020 election, when "Stop the Steal" groups on Facebook helped facilitate the Jan. 6 Capitol insurrection.
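To make the phrase "a complex set of systems that determine what you're shown" concrete, here is a minimal, hypothetical sketch of engagement-weighted feed ranking. It is not Meta's algorithm; the signals, weights, and candidate posts are invented purely for illustration.

```python
# Toy engagement-weighted feed ranking. All signals and weights are invented;
# this only illustrates the general shape of such systems, not any real one.
from dataclasses import dataclass

@dataclass
class Candidate:
    label: str               # e.g. "friend's vacation photo", "reshared political meme"
    p_like: float            # model-estimated probability the viewer likes it
    p_comment: float         # ... comments on it
    p_reshare: float         # ... reshares it
    from_close_friend: bool

def feed_score(c: Candidate) -> float:
    """Weighted sum of predicted engagement, with a boost for close ties."""
    score = 1.0 * c.p_like + 4.0 * c.p_comment + 6.0 * c.p_reshare
    return score * (1.5 if c.from_close_friend else 1.0)

candidates = [
    Candidate("friend's vacation photo", 0.30, 0.05, 0.01, True),
    Candidate("reshared political meme", 0.20, 0.10, 0.15, False),
]

for c in sorted(candidates, key=feed_score, reverse=True):
    print(f"{feed_score(c):.2f}  {c.label}")
```

In this toy setup the highly reshareable meme outscores the friend's photo (1.50 versus 0.84), which is the kind of dynamic behind the scrutiny described above: optimizing for predicted engagement can favor viral, reshared content.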

Society often relies on social algorithms that adapt to human behavior. Yet scientists struggle to generalize the combined behavior of mutually adapting humans and algorithms. This scientific challenge is a governance problem when algorithms amplify human responses to falsehoods. Could attempts to influence humans have second-order effects on algorithms? Using a large-scale field experiment, I test whether influencing readers to fact-check unreliable sources causes news aggregation algorithms to promote or lessen the visibility of those sources. Interventions encouraged readers either to fact-check articles or to fact-check and provide votes to the algorithm. Across 1,104 discussions, these encouragements increased human fact-checking and reduced vote scores on average.

The fact-checking condition also caused the algorithm to reduce the promotion of articles over time by as much as 25 rank positions on average, enough to remove an article from the front page. Overall, this study offers a path for the science of human-algorithm behavior by experimentally demonstrating how influencing collective human behavior can also influence algorithm behavior.

In recent years, communications technologies have broadened access to so much information that society relies on automated systems to filter, rank, suggest, and inform human thoughts and actions. These algorithms have been implicated in many complex patterns in collective human behavior, including misinformation [1], voting behavior [2,3], social movements [4,5], charitable donations [6], sexist and racist harassment [7,8], public safety [9], and extremism [10]. Consequently, the work of maintaining democratic societies now involves managing algorithms as well as people [11-13].
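The mechanism behind that rank drop is easier to see with a toy model. The sketch below is not the study's code; it assumes a simplified Reddit-style "hot" ranking (log-scaled net votes plus a recency term), with invented vote counts and a made-up pool of competing posts, to show how suppressed vote scores alone can push an article many positions down a front page.

```python
# Minimal sketch of a vote-driven "hot" ranking (simplified from Reddit's
# public formula). Articles, vote counts, and the competitor pool are invented.
import math
import random
from datetime import datetime, timedelta, timezone

EPOCH = datetime(2005, 12, 8, 7, 46, 43, tzinfo=timezone.utc)

def hot_score(net_votes: int, posted_at: datetime) -> float:
    """Log-scaled vote score plus a recency term: better-voted and newer posts rank higher."""
    order = math.log10(max(abs(net_votes), 1))
    sign = (net_votes > 0) - (net_votes < 0)
    return sign * order + (posted_at - EPOCH).total_seconds() / 45000

def rank_of(score: float, pool: list) -> int:
    """1-based front-page position of a score within a pool of competing scores."""
    return 1 + sum(other > score for other in pool)

random.seed(0)
now = datetime.now(timezone.utc)

# 100 competing posts from the last 12 hours with assorted net vote counts.
pool = [
    hot_score(random.randint(5, 600), now - timedelta(minutes=random.randint(0, 720)))
    for _ in range(100)
]

posted = now - timedelta(hours=3)
untreated = hot_score(500, posted)   # article whose readers vote as usual
treated = hot_score(80, posted)      # same article after fact-checking suppresses votes

print("rank without intervention:", rank_of(untreated, pool))
print("rank with intervention:   ", rank_of(treated, pool))
# With the same posting time, the lower vote score alone drops the article well
# down the list -- the same kind of movement as the ~25-position average above.
```

In the experiment itself the causal estimate comes from randomizing which discussions receive the encouragement; the toy model only shows why fewer votes translate into lower placement by the ranking algorithm.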

Explaining collective human and algorithm behavior has become an urgent scientific question due to these pragmatic concerns [14].

About seven-in-ten Americans use social media to connect with others, share aspects of their lives and consume information. The connections and content they encounter on these sites are shaped not just by their own decisions, but also by the algorithms and artificial intelligence technologies that govern many aspects of these online environments. Social media companies use algorithms for a variety of functions on their platforms, including to decide and structure what flow of content users see; figure out what ads a user will like; make recommendations... The companies also use these algorithms to scale up efforts to identify false information on their sites – recognizing the pressing challenge of halting the spread of misinformation on their platforms, but also faced... While a variety of approaches can be used to find content that does not pass fact-checking standards and predict similar posts, the challenges of modern content moderation often require more efficient and scalable approaches...
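One common way to make that scale, sketched below purely as an illustration (not any platform's actual pipeline), is to reuse existing fact-check verdicts: new posts that closely resemble claims already rated false are flagged for review instead of being checked from scratch. The claims, posts, and threshold here are invented.

```python
# Hedged sketch of similarity-based flagging: match incoming posts against
# already fact-checked claims and queue close matches for review or labeling.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Claims that fact-checkers have already rated false (toy examples).
debunked_claims = [
    "ballots were counted twice in several counties",
    "the vaccine alters human dna",
    "5g towers spread the virus",
]

incoming_posts = [
    "New report says ballots got counted twice in three counties!",
    "Here are photos from my trip to the lake this weekend.",
    "They don't want you to know the vaccine changes your DNA.",
]

vectorizer = TfidfVectorizer(stop_words="english")
claim_matrix = vectorizer.fit_transform(debunked_claims)
post_matrix = vectorizer.transform(incoming_posts)

# Posts whose best match to a debunked claim exceeds the threshold are queued
# for a reviewer or an existing fact-check label rather than re-checked anew.
SIMILARITY_THRESHOLD = 0.35
similarities = cosine_similarity(post_matrix, claim_matrix)

for post, sims in zip(incoming_posts, similarities):
    best = sims.max()
    if best >= SIMILARITY_THRESHOLD:
        matched = debunked_claims[sims.argmax()]
        print(f"FLAG ({best:.2f}): {post!r} ~ {matched!r}")
    else:
        print(f"pass  ({best:.2f}): {post!r}")
```

The appeal of this kind of matching is that a single fact-check can cover many near-duplicate posts, which is what makes it more scalable than checking each post from scratch.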

Pew Research Center’s November survey reveals a public relatively split when it comes to whether algorithms for finding false information on these platforms are good or bad for society at large – and similarly... It also finds Republicans particularly opposed to such algorithms, echoing partisan divides in other Center research related to technology and online discourse – from the seriousness of offensive content online to whether tech companies... Asked about the widespread use of these computer programs by social media companies to find false information on their sites, 38% of U.S. adults think this has been a good idea for society. But 31% say this has been a bad idea, and a similar share say they are not sure. Companies have taken action on posts they determine contain falsehoods, including adding fact-check labels to misinformation relating to the 2020 presidential election and the coronavirus.

Many people say they have seen these downstream impacts of algorithms’ work: About three-quarters of social media users (74%) say they have ever seen information on social media sites that has been flagged or...
