Social Media Algorithms Exploit How Humans Learn From Their Peers
In prehistoric societies, humans tended to learn from members of their ingroup or from more prestigious individuals, because information from those sources was more likely to be reliable and to result in group success. However, with the advent of diverse and complex modern communities -- and especially on social media -- these biases have become less effective. For example, a person we are connected to online is not necessarily trustworthy, and people can easily feign prestige on social media. In a review published in the journal Trends in Cognitive Sciences on August 3, a group of social scientists describe how the functions of social media algorithms are misaligned with human social instincts meant to foster cooperation and collective problem-solving. "Several user surveys now both on Twitter and Facebook suggest most users are exhausted by the political content they see. A lot of users are unhappy, and there's a lot of reputational components that Twitter and Facebook must face when it comes to elections and the spread of misinformation," says first author William Brady, a social psychologist at Northwestern University.
"We wanted to put out a systematic review that's trying to help understand how human psychology and algorithms interact in ways that can have these consequences," says Brady. "One of the things that this review brings to the table is a social learning perspective. As social psychologists, we're constantly studying how we can learn from others. This framework is fundamentally important if we want to understand how algorithms influence our social interactions." Humans are biased to learn from others in a way that typically promotes cooperation and collective problem-solving, which is why they tend to learn more from individuals they perceive as a part of their... In addition, when learning biases were first evolving, morally and emotionally charged information was important to prioritize, as this information would be more likely to be relevant to enforcing group norms and ensuring collective...
In contrast, algorithms usually select information that boosts user engagement in order to increase advertising revenue. This means algorithms amplify the very information humans are biased to learn from, and they can oversaturate social media feeds with what the researchers call Prestigious, Ingroup, Moral, and Emotional (PRIME) information, regardless of its accuracy or how well it represents a group's opinions. As a result, extreme political content and controversial topics are more likely to be amplified, and if users are not exposed to outside opinions, they can come away with a false understanding of the prevailing views of other groups.
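Neither the review nor this article publishes any platform's actual ranking code, but the incentive structure described above can be sketched in a few lines. The toy Python ranker below is only an illustration: the `is_prime` flag, the engagement weights, and the example posts are assumptions made up for the sketch, not real platform parameters. It orders posts purely by predicted engagement, so PRIME content rises to the top even though the scorer never consults accuracy.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    is_prime: bool      # prestigious / ingroup / moral-emotional content
    accuracy: float     # 0.0-1.0; present in the data, ignored by the ranker
    base_appeal: float  # intrinsic interest, 0.0-1.0

def predicted_engagement(post: Post) -> float:
    """Toy engagement model: PRIME content draws outsized reactions,
    so it scores higher whether or not it is accurate."""
    prime_boost = 2.5 if post.is_prime else 1.0  # illustrative weight
    return post.base_appeal * prime_boost

def rank_feed(posts: list[Post]) -> list[Post]:
    # An engagement-maximizing ranker never looks at `accuracy`.
    return sorted(posts, key=predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Nuanced policy analysis", is_prime=False, accuracy=0.9, base_appeal=0.6),
    Post("Outrage at the outgroup!", is_prime=True, accuracy=0.3, base_appeal=0.5),
    Post("Celebrity hot take", is_prime=True, accuracy=0.4, base_appeal=0.4),
])
for post in feed:
    print(f"{predicted_engagement(post):.2f}  {post.text}")
# 1.25  Outrage at the outgroup!   (accuracy 0.3)
# 1.00  Celebrity hot take         (accuracy 0.4)
# 0.60  Nuanced policy analysis    (accuracy 0.9)
```

The point is the objective function: accuracy sits in the data but contributes nothing to the score, which is the functional misalignment the researchers describe.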
Social Media Algorithms Warp How People Learn from Each Other
Social media companies’ drive to keep you on their platforms clashes with how people evolved to learn from each other.

The following essay is reprinted with permission from The Conversation, an online publication covering the latest research. People’s daily interactions with online algorithms affect how they learn from others, with negative consequences including social misperceptions, conflict and the spread of misinformation, my colleagues and I have found.

Social media has become an integral part of our lives. We use it to connect with friends and family, share our thoughts and experiences, and stay up to date with the latest news and trends. But social media platforms are not just a means of communication.
They are also powerful tools that use algorithms to influence our behavior and shape our opinions. One way social media algorithms exploit how humans learn from their peers is through social proof: the psychological tendency to conform to the actions and opinions of others in order to fit in and be accepted. Platforms harness social proof by showing us what our friends and followers are doing and liking. For example, when we log into Facebook, we see a news feed tailored to our interests and preferences, generated by an algorithm that takes into account our past behavior on the platform as well as the behavior of our friends and followers. The algorithm surfaces posts similar to the ones we have liked and shared in the past, along with posts that are popular among our friends and followers. Similarly, when we search for something on Instagram, the platform returns posts relevant to our query alongside posts popular among the accounts we follow. This creates a feedback loop: we are more likely to engage with posts that are already popular, which in turn makes them even more popular.
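A few lines of Python make that loop concrete. This is a minimal simulation under assumed conditions, with made-up posts and counts rather than real platform data: exposure is proportional to current popularity, and every engagement raises that popularity.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical posts with near-identical starting popularity.
popularity = {"post_a": 10, "post_b": 9, "post_c": 8}

for _ in range(10_000):  # each iteration: one user views the feed
    posts, weights = zip(*popularity.items())
    # The feed surfaces posts in proportion to current popularity,
    # so the post a user engages with is popularity-weighted...
    chosen = random.choices(posts, weights=weights)[0]
    # ...and each engagement raises that post's future exposure.
    popularity[chosen] += 1

print(popularity)  # the final split varies with the seed; early random
                   # fluctuations are amplified and then locked in
```

Runs of this kind (a Pólya-urn-style reinforcement process) end with popularity gaps that reflect early luck rather than content quality, which is why "already popular" is such a poor proxy for "worth learning from."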
Human social learning is increasingly occurring on online social platforms such as Twitter, Facebook, and TikTok. On these platforms, algorithms exploit existing social-learning biases (i.e., toward prestigious, ingroup, moral, and emotional, or 'PRIME', information) to sustain users' attention and maximize engagement. Here, we synthesize emerging insights into 'algorithm-mediated social learning' and propose a framework that examines its consequences in terms of functional misalignment.
We suggest that, when social-learning biases are exploited by algorithms, PRIME information becomes amplified via human-algorithm interactions in the digital social environment in ways that cause social misperceptions and conflict, and spread misinformation. We discuss solutions for reducing functional misalignment, including algorithms promoting bounded diversification and increasing transparency of algorithmic amplification. Keywords: algorithms; norms; social learning; social media; social networks.
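The review itself does not specify an implementation of bounded diversification. One plausible reading, sketched below purely as an illustration, is a re-ranking pass that keeps the engagement ordering but caps how many PRIME items may appear in any short stretch of the feed; the `window` and `max_prime` parameters and all function names are assumptions for the sketch, not values from the paper.

```python
def rerank_bounded(posts, is_prime, window=5, max_prime=2):
    """Greedy re-ranker: preserve the incoming (engagement) order, but
    allow at most `max_prime` PRIME posts in any `window` consecutive
    slots. Parameter values are illustrative, not from the paper."""

    def prime_in_tail(feed):
        tail = feed[-(window - 1):] if window > 1 else []
        return sum(1 for p in tail if is_prime(p))

    feed, deferred = [], []
    for post in posts:
        if is_prime(post) and prime_in_tail(feed) >= max_prime:
            deferred.append(post)        # hold back: window is saturated
        else:
            feed.append(post)
            # Re-admit held-back PRIME posts once the window clears.
            while deferred and prime_in_tail(feed) < max_prime:
                feed.append(deferred.pop(0))
    feed.extend(deferred)  # leftovers go last (tail may exceed the cap)
    return feed


# Example: 'P*' marks PRIME posts; input is sorted by engagement score.
posts = ["P1", "P2", "P3", "N1", "P4", "N2", "P5", "N3"]
print(rerank_bounded(posts, is_prime=lambda p: p.startswith("P")))
# -> ['P1', 'P2', 'N1', 'N2', 'N3', 'P3', 'P4', 'P5']
```

A cap of this sort diversifies within bounds rather than suppressing PRIME content outright, which fits the "bounded" framing; pairing it with user-visible labels on amplified items would speak to the transparency recommendation as well.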
William Brady, Assistant Professor of Management and Organizations at Northwestern University, does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment. People are increasingly interacting with others in social media environments where algorithms control the flow of social information they see. Algorithms determine in part which messages, which people and which ideas social media users see. On social media platforms, algorithms are mainly designed to amplify information that sustains engagement, meaning they keep people clicking on content and coming back to the platforms.
I’m a social psychologist, and my colleagues and I have found evidence suggesting that a side effect of this design is that algorithms amplify information people are strongly biased to learn from. We call this information "PRIME," for prestigious, in-group, moral and emotional information.
Social media companies and their algorithms have repeatedly been accused of fueling political polarization by promoting divisive content on their platforms. Now two U.S. senators have introduced legislation aimed at holding tech companies accountable for those business practices.
Sens. John Curtis, R-Utah, and Mark Kelly, D-Ariz., joined Morning Edition host Steve Inskeep to talk about the impact of social media algorithms on U.S. politics and beyond, and about their plan to address it.