Social Media Algorithms Exploit How Humans Learn From Their Peers

Bonisiwe Shabane

Social Media Algorithms Warp How People Learn from Each Other

Social media companies' drive to keep you on their platforms clashes with how people evolved to learn from each other: social media pushes evolutionary buttons. The following essay is reprinted with permission from The Conversation, an online publication covering the latest research. People's daily interactions with online algorithms affect how they learn from others, with negative consequences including social misperceptions, conflict, and the spread of misinformation, my colleagues and I have found. Human social learning increasingly occurs on online platforms such as Twitter, Facebook, and TikTok.

On these platforms, algorithms exploit existing social-learning biases (i.e., towards prestigious, ingroup, moral, and emotional information, or 'PRIME' information) to sustain users' attention and maximize engagement. Here, we synthesize emerging insights into 'algorithm-mediated social learning' and propose a framework that examines its consequences in terms of functional misalignment. We suggest that, when social-learning biases are exploited by algorithms, PRIME information becomes amplified via human-algorithm interactions in the digital social environment in ways that cause social misperceptions and conflict, and spread misinformation. We discuss solutions for reducing functional misalignment, including algorithms promoting bounded diversification and increasing transparency of algorithmic amplification.

Keywords: algorithms; norms; social learning; social media; social networks.

Copyright © 2023 Elsevier Ltd.

All rights reserved. Declaration of interests: the authors have no interests to declare.

In prehistoric societies, humans tended to learn from members of our ingroup or from more prestigious individuals, as this information was more likely to be reliable and result in group success. However, with the advent of diverse and complex modern communities, and especially on social media, these biases become less effective. For example, a person we are connected to online might not necessarily be trustworthy, and people can easily feign prestige on social media. In a review published in the journal Trends in Cognitive Sciences on August 3rd, a group of social scientists describe how the functions of social media algorithms are misaligned with human social instincts meant to foster cooperation.

"Several user surveys now both on Twitter and Facebook suggest most users are exhausted by the political content they see. A lot of users are unhappy, and there's a lot of reputational components that Twitter and Facebook must face when it comes to elections and the spread of misinformation," says first author William Brady. "We wanted to put out a systematic review that's trying to help understand how human psychology and algorithms interact in ways that can have these consequences," says Brady. "One of the things that this review brings to the table is a social learning perspective. As social psychologists, we're constantly studying how we can learn from others. This framework is fundamentally important if we want to understand how algorithms influence our social interactions."

Humans are biased to learn from others in a way that typically promotes cooperation and collective problem-solving, which is why they tend to learn more from individuals they perceive as part of their ingroup. In addition, when learning biases were first evolving, morally and emotionally charged information was important to prioritize, as this information would be more likely to be relevant to enforcing group norms and ensuring collective survival. In contrast, algorithms usually select information that boosts user engagement in order to increase advertising revenue. This means algorithms amplify the very information humans are biased to learn from, and they can oversaturate social media feeds with what the researchers call Prestigious, Ingroup, Moral, and Emotional (PRIME) information, regardless of its accuracy or representativeness. As a result, extreme political content or controversial topics are more likely to be amplified, and if users are not exposed to outside opinions, they might find themselves with a false understanding of the majority opinion of different groups.
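The oversaturation dynamic described above can be illustrated with a minimal toy simulation, not the paper's own model: we assume each post has a "PRIME" score and that predicted engagement rises with that score, then compare an engagement-optimizing ranker against a chronological baseline. All names and numbers here are illustrative assumptions.

```python
import random

random.seed(0)

# Toy post pool: each post gets a "PRIME" score in [0, 1] representing how
# prestigious / ingroup / moral-emotional it is (an assumption for illustration).
posts = [{"id": i, "prime": random.random()} for i in range(1000)]

def predicted_engagement(post):
    # Assumption: engagement tracks PRIME content, plus some noise.
    return post["prime"] + random.gauss(0, 0.1)

FEED_SIZE = 10

# Chronological baseline: a random sample of the pool.
chronological = random.sample(posts, FEED_SIZE)

# Engagement-optimizing ranker: the posts it predicts will engage most.
ranked = sorted(posts, key=predicted_engagement, reverse=True)[:FEED_SIZE]

def avg_prime(feed):
    return sum(p["prime"] for p in feed) / len(feed)

print(f"mean PRIME score, chronological feed:      {avg_prime(chronological):.2f}")
print(f"mean PRIME score, engagement-ranked feed:  {avg_prime(ranked):.2f}")
```

Under these assumptions the ranked feed's average PRIME score is far above the baseline's: the ranker amplifies exactly the content the learning biases already favor.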

Social media has become an integral part of our lives. We use it to connect with friends and family, share our thoughts and experiences, and stay up-to-date with the latest news and trends. However, social media platforms are not just a means of communication. They are also powerful tools that use algorithms to influence our behavior and shape our opinions. One of the ways social media algorithms exploit how humans learn from their peers is through the use of social proof. Social proof is a psychological phenomenon where people conform to the actions and opinions of others in order to fit in and be accepted.

Social media platforms use social proof to influence our behavior by showing us what our friends and followers are doing and liking. For example, when we log into Facebook, we see a news feed that is tailored to our interests and preferences. This news feed is generated by an algorithm that takes into account our past behavior on the platform, as well as the behavior of our friends and followers. The algorithm shows us posts that are similar to the ones we have liked and shared in the past, as well as posts that are popular among our friends and followers. Similarly, when we search for something on Instagram, the platform shows us posts that are relevant to our search query, as well as posts that are popular among our followers. This creates a feedback loop where we are more likely to engage with posts that are already popular, which in turn makes them even more popular.

[Image: diagram of how algorithms can lead to social misperceptions. Credit: Trends in Cognitive Sciences, Brady et al.]

