The Real AI Threat Is Algorithms That 'Enrage to Engage' | Mind Leap
In April of this year, teenage Adam Raine died. He died by his own hand. And he did so with the encouragement of ChatGPT. In February of this year, teenage Elijah Heacock died as well. Again, by his own hand. He was the victim of a sextortion scam using AI-generated deepfake nudes.
Gen AI is becoming pervasive in the official system too. In January 2020, Detroit police wrongfully arrested Robert Williams in front of his wife and young daughters. Flawed facial recognition technology (which relies on essentially the same technologies that underlie text-generative AI) mistakenly pegged Williams as the thief who stole watches from an upscale store. When police actually looked at the video footage, it became clear that Williams was not the thief, and they released him…after he had sat in a jail cell for more than 30 hours. By the way, Williams is Black. In April of this year, facial recognition technology identified Trevis Williams as a flasher, despite the fact that he was 8 inches taller, 70 pounds heavier, and 12 miles from the crime scene.
After he had spent two days in jail, authorities dismissed his case. Trevis is also Black. This seems to be a pattern.

Eric Schwartzman, really thoughtful piece. Thank you for naming so clearly the real, present danger of outrage-driven algorithms. At #MindLeap, we see the same pattern every day: the near-term threat isn't an AI apocalypse but the quiet erosion of human judgment, attention, and emotional regulation happening right now. If we may add one perspective: outrage is the symptom, but the deeper issue is the weakening of the human capacities that make people resilient against manipulation.
Alongside platform reforms, we need modest interventions that rebuild those capacities: small digital "friction" to restore agency, education that teaches meta-cognition and attention skills, and more transparency around how local institutions use algorithms. Thank you again for elevating this conversation. It's exactly the right alarm at exactly the right time.

After four months of reporting this Fast Company story (released this morning), I can't say I'm surprised by what I found. But I do feel clearer. Here's why the world feels so volatile right now: algorithms push the most emotionally charged content because level-headed points of view command no attention.
Politicians and influencers exploit the algorithms by saying whatever travels farthest. Foreign adversaries amplify their posts with bot farms and fake engagement because division is a strategic asset. Platforms profit from the outrage because the attention it creates is what they sell to advertisers. Message repetition through fake social proof normalizes fringe views, which then get mistaken for the truth. And reality gets emotionally engineered by what "everyone" appears to be saying. None of this is speculative.
None of it is theoretical. It's why we have the elected officials we do. Why we support the policies we have. And none of it is limited to one political side or the other. Once you understand how the machinery works, it becomes easier to see the difference between human judgment and algorithmic probability, and easier to appreciate what should never be automated. That's what my new Fast Company piece lays out: The Real AI Threat Is Algorithms That Enrage to Engage (https://lnkd.in/gWqktAEP). How will you resist the temptation to chase algorithms and preserve your humanity in...
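The feedback loop described above (engagement-optimized ranking rewarding emotional charge over quality) can be sketched as a toy ranking function. The posts, scores, and weights below are invented for illustration; no platform's actual ranking formula is public or this simple.

```python
# Toy sketch: why engagement-optimized feeds favor outrage.
# All post data and scoring weights are hypothetical illustrations.

posts = [
    {"text": "Measured policy analysis", "quality": 0.9, "outrage": 0.1},
    {"text": "Inflammatory hot take",    "quality": 0.2, "outrage": 0.9},
    {"text": "Neutral news summary",     "quality": 0.7, "outrage": 0.2},
]

def predicted_engagement(post):
    # Engagement models learn that emotional arousal drives clicks and
    # shares far more reliably than informational quality does, so the
    # learned weights end up skewed toward outrage.
    return 0.2 * post["quality"] + 0.8 * post["outrage"]

feed = sorted(posts, key=predicted_engagement, reverse=True)
print([p["text"] for p in feed])
# The inflammatory post ranks first despite having the lowest quality.
```

The point of the sketch is that no one has to intend the outcome: once the objective is predicted engagement, the ordering follows mechanically.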
The rise of artificial intelligence (AI) is one of the most transformative advancements of the 21st century. From self-driving cars to voice assistants and healthcare diagnostics, AI promises to revolutionize nearly every aspect of our lives. But as with any powerful technology, AI comes with its dark side, a side that many are only beginning to realize as its capabilities continue to evolve. The growing concern surrounding AI isn't just about its impact on jobs or its ethical implications; it's about the cybersecurity threats and privacy risks that come with its integration into everyday life. In this exploration, we will dive deep into these concerns, examining how AI is both a tool for defending and attacking our digital infrastructure, and what we can do to protect ourselves in a...

Before we can understand the darker side of AI, it's crucial to first recognize its role in the world of cybersecurity.
Traditionally, cybersecurity was a human-driven effort, relying on analysts, firewalls, and intrusion detection systems to protect sensitive information from malicious actors. But as cyber threats have evolved in complexity and scale, so too has the need for more advanced solutions. Enter AI. AI has proven itself to be a valuable asset in cybersecurity, with machine learning algorithms capable of analyzing vast amounts of data and identifying threats that human operators might miss. From malware detection to phishing attacks and even identifying suspicious behavior within networks, AI systems can process information much faster and more accurately than any human team. This makes it a game-changer for organizations looking to bolster their defenses in an era of increasingly sophisticated cyberattacks.
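The kind of pattern-spotting described above can be illustrated with a minimal baseline-deviation detector. Real systems learn over many features with far richer models; the traffic numbers and threshold here are invented for illustration.

```python
# Toy illustration of ML-style anomaly detection on network traffic.
# Flags event rates that deviate sharply from a learned baseline.
# The baseline data and threshold are hypothetical.
import statistics

baseline = [100, 98, 105, 102, 97, 103, 99, 101]  # normal requests/min
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(rate, threshold=3.0):
    # Flag rates more than `threshold` standard deviations from the
    # baseline mean, a crude stand-in for a learned anomaly score.
    return abs(rate - mean) / stdev > threshold

print(is_anomalous(104))   # ordinary fluctuation -> False
print(is_anomalous(500))   # sudden burst resembling an attack -> True
```

The advantage over static rules is that the "normal" profile is derived from the data itself, so the detector adapts as baseline behavior shifts, which is what lets such systems catch threats human-written rules would miss.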
But this same power can be weaponized by malicious actors. AI can be used to automate and scale cyberattacks, making them faster, more targeted, and harder to detect. Attackers can leverage AI to create adaptive malware that learns and evolves in response to security measures, effectively making traditional defense mechanisms obsolete. AI can also be used to conduct spear-phishing attacks that are incredibly convincing, as it can analyze public data to craft personalized messages that are more likely to deceive targets. As AI becomes more advanced, the concept of autonomous cyberattacks becomes a reality. Unlike traditional cyberattacks, which often rely on human intervention, autonomous attacks can be initiated and executed without any direct human involvement.
This introduces a new level of unpredictability and scale that could overwhelm existing cybersecurity infrastructure.