Social Media Algorithms Exploit How We Learn From Our Peers
In prehistoric societies, humans tended to learn from members of their ingroup or from more prestigious individuals, as this information was more likely to be reliable and result in group success. However, with the advent of diverse and complex modern communities -- and especially on social media -- these biases have become less effective. For example, a person we are connected to online is not necessarily trustworthy, and people can easily feign prestige on social media.

In a review published in the journal Trends in Cognitive Sciences on August 3rd, a group of social scientists describe how the functions of social media algorithms are misaligned with human social instincts meant to foster cooperation and collective problem-solving.

"Several user surveys now, both on Twitter and Facebook, suggest most users are exhausted by the political content they see. A lot of users are unhappy, and there are a lot of reputational components that Twitter and Facebook must face when it comes to elections and the spread of misinformation," says first author William Brady, an assistant professor of management and organizations at Northwestern University.
"We wanted to put out a systematic review that's trying to help understand how human psychology and algorithms interact in ways that can have these consequences," says Brady. "One of the things that this review brings to the table is a social learning perspective. As social psychologists, we're constantly studying how we can learn from others. This framework is fundamentally important if we want to understand how algorithms influence our social interactions." Humans are biased to learn from others in a way that typically promotes cooperation and collective problem-solving, which is why they tend to learn more from individuals they perceive as a part of their... In addition, when learning biases were first evolving, morally and emotionally charged information was important to prioritize, as this information would be more likely to be relevant to enforcing group norms and ensuring collective...
In contrast, algorithms usually select information that boosts user engagement in order to increase advertising revenue. This means algorithms amplify the very information humans are biased to learn from, and they can oversaturate social media feeds with what the researchers call Prestigious, Ingroup, Moral, and Emotional (PRIME) information, regardless of its accuracy or how representative it is of a group's opinions. As a result, extreme political content and controversial topics are more likely to be amplified, and users who are not exposed to outside opinions may come away with a false understanding of the majority opinion of different groups.

Social Media Algorithms Warp How People Learn from Each Other
Social media companies' drive to keep you on their platforms clashes with how people evolved to learn from each other.
The following essay is reprinted with permission from The Conversation, an online publication covering the latest research.

People's daily interactions with online algorithms affect how they learn from others, with negative consequences including social misperceptions, conflict and the spread of misinformation, my colleagues and I have found.

A small tweak to your social media feed can make your opponents feel a little less like enemies. In a new study published in Science, a Stanford-led team used a browser extension and a large language model to rerank posts on X during the 2024 U.S. presidential campaign, showing that changing the visibility of the most hostile political content can measurably dial down partisan heat without deleting a single post or asking the platform for permission. The experiment, run with 1,256 Democrats and Republicans who used X in the weeks after an attempted assassination of Donald Trump and the withdrawal of Joe Biden from the race, targeted a particular kind of post.
The researchers focused on posts that expressed antidemocratic attitudes and partisan animosity, such as cheering political violence, rejecting bipartisan cooperation, or suggesting that democratic rules are expendable when they get in the way of their own side's goals.

To reach inside a platform they did not control, first author Tiziano Piccardi and colleagues built a browser extension that quietly intercepted the web version of the X timeline. Every time a participant opened the For You feed, the extension captured the posts, sent them to a remote backend, and had a large language model score each political post on eight dimensions of antidemocratic attitudes and partisan animosity. If a post hit at least four of those eight dimensions, it was tagged as the kind of content most likely to inflame. The tool then reordered the feed for consenting users in real time. In one experiment, it pushed those posts down the feed so participants would need to scroll further to hit the worst material.
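To make the decision rule concrete, here is a minimal TypeScript sketch of the scoring-and-flagging step as described above. Only the eight-dimension scoring and the at-least-four-of-eight cutoff come from the study's description; the backend URL, the response shape, and the function names are hypothetical stand-ins, not the study's actual code.

```typescript
// Sketch of the flagging rule described above -- not the study's code.
interface Post {
  id: string;
  text: string;
}

// Hypothetical backend endpoint where an LLM returns a yes/no
// judgment for each of the eight AAPA dimensions of a post.
const SCORING_URL = "https://example-backend.invalid/score"; // placeholder

async function scoreAapaDimensions(post: Post): Promise<boolean[]> {
  const res = await fetch(SCORING_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: post.text }),
  });
  // Assumed response shape: { dimensions: boolean[] } with eight entries.
  const { dimensions } = (await res.json()) as { dimensions: boolean[] };
  return dimensions;
}

// Decision rule as reported: a post counts as AAPA content when it
// hits at least four of the eight scored dimensions.
function isAapaPost(dimensionHits: boolean[], threshold = 4): boolean {
  return dimensionHits.filter(Boolean).length >= threshold;
}
```

A hard cutoff like this keeps the downstream step simple: the reranker only needs a single boolean per post rather than raw model scores.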
In a parallel experiment, it did the opposite and pulled that content higher. "Social media algorithms directly impact our lives, but until now, only the platforms had the ability to understand and shape them," said Michael Bernstein, a professor of computer science in Stanford's School of Engineering. "We have demonstrated an approach that lets researchers and end users have that power."

Human social learning is increasingly occurring on online social platforms, such as Twitter, Facebook, and TikTok. On these platforms, algorithms exploit existing social-learning biases (i.e., towards prestigious, ingroup, moral, and emotional information, or 'PRIME' information) to sustain users' attention and maximize engagement. Here, we synthesize emerging insights into 'algorithm-mediated social learning' and propose a framework that examines its consequences in terms of functional misalignment.
We suggest that, when social-learning biases are exploited by algorithms, PRIME information becomes amplified via human-algorithm interactions in the digital social environment in ways that cause social misperceptions and conflict and spread misinformation. We discuss solutions for reducing functional misalignment, including algorithms that promote bounded diversification and increase the transparency of algorithmic amplification.
A study shows that the order in which platforms like X display content to their users affects their animosity towards other ideological groups.

A team of U.S. researchers has shown that the order in which political messages are displayed on social media platforms does affect polarization, one of the most debated issues since the rise of social media and the algorithms that curate it. The phenomenon is equally strong regardless of the user's political orientation, the academics note in an article published on Thursday in Science. Social media is an important source of political information. For hundreds of millions of people worldwide, it is even the main channel for political engagement: they receive political content, share it, and express their opinions through these platforms.
Given the relevance of social media in this sphere, understanding how the algorithms that operate on these platforms work is crucial, but opacity is the norm in the industry. That makes it extremely difficult to estimate the extent to which the selection of highlighted content shapes users' political views.

How did the researchers overcome algorithmic opacity to alter the order of posts that social media users see? Tiziano Piccardi of Stanford University and his colleagues developed a browser extension that intercepts and reorders the feed (the stream of posts a user sees) of certain social networks in real time. The tool used a large language model (LLM) to assign a score to each piece of content, measuring the extent to which it expressed "antidemocratic attitudes and partisan animosity" (AAPA). Once scored, the posts were reordered one way or the other, without any collaboration from the platform or reliance on its algorithm.
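Given per-post AAPA flags, the reordering itself can be as simple as a stable partition of the captured timeline. Below is a minimal sketch under that assumption; the field name and the two-mode interface are illustrative, not taken from the study's implementation.

```typescript
// Reorder a captured feed without removing anything: flagged posts
// move to the bottom ("demote") or the top ("promote").
interface RankedPost {
  id: string;
  flaggedAapa: boolean; // output of the four-of-eight rule
}

function rerank(feed: RankedPost[], mode: "demote" | "promote"): RankedPost[] {
  const flagged = feed.filter((p) => p.flaggedAapa);
  const rest = feed.filter((p) => !p.flaggedAapa);
  // filter() preserves relative order, so each group keeps its original
  // ordering -- only the scroll distance to flagged posts changes.
  return mode === "demote" ? [...rest, ...flagged] : [...flagged, ...rest];
}
```

Note that every post stays in the feed; the intervention changes only how far a user must scroll to reach it, which is what lets the researchers test ordering effects in isolation from content removal.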
The experiment involved 1,256 participants, all of whom had been duly informed and had consented. The study focused on X, as it is the social network most used in the U.S. for expressing political opinions, and it was conducted during the weeks leading up to the 2024 presidential election to ensure a high circulation of political messages.
People are increasingly interacting with others in social media environments where algorithms control the flow of social information they see. Algorithms determine in part which messages, which people and which ideas social media users see. On social media platforms, algorithms are mainly designed to amplify information that sustains engagement, meaning they keep people clicking on content and coming back to the platforms. I’m a social psychologist, and my colleagues and I have found evidence suggesting that a side effect of this design is that algorithms amplify information people are strongly biased to learn from. We call this information “PRIME,” for prestigious, in-group, moral and emotional information.
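As a toy illustration of that side effect, consider a ranker that sorts purely by predicted engagement. Nothing below is any platform's real ranking code, and the field names are invented; the point is that if PRIME-style content reliably earns higher engagement predictions, it rises to the top without the ranker representing "PRIME" anywhere.

```typescript
// Toy engagement ranker: PRIME amplification emerges as a side effect
// of the sort key, not from any explicit rule about content.
interface FeedItem {
  id: string;
  predictedEngagement: number; // e.g., a modeled click/reply probability
}

function rankByEngagement(items: FeedItem[]): FeedItem[] {
  // Sort descending by predicted engagement.
  return [...items].sort((a, b) => b.predictedEngagement - a.predictedEngagement);
}
```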