The Real AI Threat Is Algorithms That Enrage To Engage
Eric Schwartzman — Really thoughtful piece — thank you for naming so clearly the real, present danger of outrage-driven algorithms. At #MindLeap, we see the same pattern every day: the near-term threat isn’t AI apocalypse but the quiet erosion of human judgment, attention, and emotional regulation happening right now. If we may add one perspective: outrage is the symptom, but the deeper issue is the weakening of the human capacities that make people resilient against manipulation. Alongside platform reforms, we need modest interventions that rebuild those capacities — small digital “friction” to restore agency, education that teaches meta-cognition and attention skills, and more transparency around how local institutions use algorithms. Thank you again for elevating this conversation. It’s exactly the right alarm at exactly the right time.
After 4 months of reporting this Fast Company story (released this morning), I can’t say I’m surprised by what I found. But I do feel clearer. Here’s why the world feels so volatile right now: Algorithms push the most emotionally charged content because level-headed POVs command no attention. Politicians and influencers exploit the algos by saying whatever travels farthest. Foreign adversaries amplify their posts with bot farms and fake engagement because division is a strategic asset. Platforms profit from the outrage because the attention it creates is what they sell to advertisers.
Message repetition through fake social proof normalizes fringe views, which then get mistaken for the truth. And the truth gets displaced by an emotionally engineered reality, shaped by what "everyone" appears to be saying. None of this is speculative. None of it is theoretical. It's why we have the elected officials we do. Why we support the policies we have.
And none of it is limited to one political side or the other. Once you understand how the machinery works, it becomes easier to see the difference between human judgment and algorithmic probability, and easier to appreciate what should never be automated. That's what my new Fast Company piece lays out: The Real AI Threat Is Algorithms That Enrage to Engage https://lnkd.in/gWqktAEP How will you resist the temptation to chase algorithms and preserve your humanity in...

Etay Maor is Chief Security Strategist for Cato Networks, a leader in advanced cloud-native cybersecurity technologies. As society integrates AI into sectors such as healthcare, finance, manufacturing, and transport, the potential for catastrophic blowback grows if these autonomous systems are not properly regulated and monitored. Listed below are some emerging threats to be aware of as organizations prepare their risk management plans for 2025.
Agentic AI systems are AI agents that aren't just responding to prompts or generating content—they're making decisions and executing complex tasks without human oversight (think autonomous vehicles). Because these systems have extensive agency and deep access to data, code, and functions, they will be a prime target for malicious threat actors. Attackers may induce undesirable behaviors, or produce malicious code and outputs, by corrupting an AI's training data or manipulating its algorithms. Threat actors can also embed backdoors or discover ways to circumvent AI-based protections such as automated fraud detection.

Shadow AI refers to employees deploying AI tools without organizational approval or oversight. This practice can bypass established security protocols, creating blind spots in an organization's defenses and introducing unmonitored vulnerabilities.
May 15, 2025 — By Charlie Lewis, Ida Kristensen, and Jeffrey Caso, with Julian Fuchs

Artificial intelligence is not just changing cybersecurity—it's redefining it. At the 2025 RSA Conference in San Francisco, where more than 40,000 cybersecurity and technology professionals convened, one theme stood out: AI is rapidly reshaping the cybersecurity landscape, bringing both unprecedented opportunities and significant... Discussions about the emerging role of agentic AI revealed how deeply AI is embedded in the future of cyber operations. As AI quickly advances cyber threats, organizations seem to be taking a more cautious approach, balancing the benefits and risks of the new technology while trying to keep pace with attackers' increasing sophistication. In turn, the cybersecurity market will benefit from two modes of growth: CISOs and cyber-risk professionals are embracing next-generation, AI-enabled security technologies, and many large enterprises are still grappling with the basics—improving foundational areas such as IT asset management, vulnerability management, and identity and access management—and will need to ramp up their efforts. Here are the three ways AI is impacting cybersecurity—and what organizations need to know to stay ahead. AI is accelerating the speed of cyberattacks, with breakout times now often under an hour. The ability of hackers to use AI tools—from creating convincing phishing emails, fake websites, and even deepfake videos to injecting malicious prompts or code—allows cybercriminals to craft personalized, realistic messages and methods that bypass... They can do so on an unprecedented scale.
The rapid adoption of generative AI has created an attack surface that dwarfs anything we've seen before. While organizations race to implement AI solutions, many security teams remain dangerously unprepared for the unique threats these technologies introduce. Remember when "Bring Your Own Device" (BYOD) sent security teams scrambling? The sudden influx of personal devices connecting to corporate networks created chaos and vulnerability. Today's AI revolution makes BYOD look like a minor hiccup. From ChatGPT and Google's Gemini to AI assistants embedded in Microsoft 365 and Slack, these tools are now inextricably woven into the fabric of modern business operations.
The challenge? Many security professionals are watching from the sidelines, overwhelmed by the pace of change and the complexity of new threats. This dangerous complacency — what we might call "threat fatigue" — leaves organizations exposed. It's time to cut through the noise and build defenses that actually work. Before addressing threats, we need clarity on what we're protecting. The term "AI" encompasses several critical technologies:
Humanities and Social Sciences Communications, volume 12, Article number: 564 (2025). Cite this article. There is a strong tendency in prevailing discussions about artificial intelligence (AI) to focus predominantly on human-centered concerns, thereby neglecting the broader impacts of this technology. This paper presents a categorization of AI risks highlighted in public discourse, as reflected in written online media accounts, to provide a background for its primary focus: exploring the dimensions of AI threats that... Particular emphasis is dedicated to the ignored issues of animal welfare and the psychological impacts on humans, the latter of which surprisingly remains inadequately addressed despite the prevalent anthropocentric perspective of the public conversation. Moreover, this work also considers other underexplored dangers of AI development for the environment and, hypothetically, for sentient AI. The methodology of this study is grounded in manual selection and a meticulous thematic and discourse-analytical examination of online articles published in the aftermath of the AI surge following ChatGPT's launch in...
This qualitative approach is specifically designed to overcome the limitations of automated, surface-level evaluations typically used in media reviews, aiming to provide insights and nuances often missed by the mechanistic and algorithm-driven methods prevalent... Through this detail-oriented investigation, a categorization of the dominant themes in the discourse on AI hazards was developed to identify its overlooked aspects. Stemming from this evaluation, the paper argues for expanding risk assessment frameworks in public thinking to a morally more inclusive approach. It calls for a more comprehensive acknowledgment of the potential harm of AI technology’s progress to non-human animals, the environment, and, more theoretically, artificial agents possibly attaining sentience. Furthermore, it calls for a more balanced allocation of focus among prospective menaces for humans, prioritizing psychological consequences, thereby offering a more sophisticated and capable strategy for tackling the diverse spectrum of perils presented... This paper examines the ongoing public discourse on the risks associated with AI, as reflected in written online media coverage.
Specifically, a classification framework of AI threats consisting of 37 + 1 categories is introduced, derived from a thematic and discourse analysis of how these dangers are portrayed in online media articles. The key purpose of this study is to reveal that the current discussion surrounding AI threats overlooks multiple critical areas: the psychological effects of AI on humans, the dangers posed to non-human animals (referred... The structure of this paper is as follows: subsequent to this introductory section, which aims to illuminate the fundamental ideas to be elaborated on later, the second section presents the theoretical background of the... The fourth section delves into the evaluation of the findings, conveying the principal aim of this study: identifying blind spots within the public discourse and highlighting the necessity to adjust and broaden its focus... Within the domain of anthropocentric perspectives, it is argued that attention must be redirected toward specific elements, particularly the psychological implications. Additionally, it will be maintained that a more inclusive approach to risk assessment is crucial, considering the interests of non-human entities—foremost among these, a vast range of animals—and recognizing them as subjects of moral...
The concluding fifth section will outline the findings and propose a significant shift in discussing AI perils, advocating for a more comprehensive framework that thoroughly addresses the diverse threats posed by AI advancements. Many hold the view that the survival of living organisms, as well as the quality of life that the Earth offers to beings living on it, are of fundamental importance. Therefore, it appears to be an essential task to consider and seek to prevent any circumstances that could possibly threaten the continuation of the existence of the natural world, diminish the living conditions on... This holds true even though a significant portion of society seems to underestimate the cognitive abilities of animals compared to scientific evidence (Leach et al. 2023). Nevertheless, the recognition of sentience across a broad spectrum of animals is increasingly reflected in legal and cultural frameworks.
(Treaty of Lisbon 2007, 49; Animal Welfare Sentience Act (2022); Andrews et al. (2024))
Read the article at https://www.fastcompany.com/91434708/ai-algorithms-amplify-extremism.