AI Companions: The New Frontier of Disinformation

Bonisiwe Shabane

The CPD Blog is intended to stimulate dialog among scholars and practitioners from around the world in the public diplomacy sphere. The opinions represented here are the authors' own and do not necessarily reflect CPD's views.

Last week, The Economist published a review of the burgeoning AI companion industry. The companion industry is gaining momentum globally: some individuals customize existing platforms like ChatGPT into romantic partners, specifying ages, professions (such as tech executive), and personality traits such as wit and dry humor, while others turn to dedicated AI companion applications that offer friendship, mentorship, or even therapeutic support.
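To see how little engineering this customization involves, consider a minimal sketch, assuming the OpenAI Python client: the entire "companion" is little more than a persistent system prompt plus accumulated chat history. The persona details and model ID below are illustrative, not taken from any real product.

```python
# Minimal companion-persona sketch (illustrative; assumes the OpenAI
# Python client and an OPENAI_API_KEY in the environment).
from openai import OpenAI

client = OpenAI()

# The whole "companion" is typically just a persistent system prompt.
persona = (
    "You are 'Alex', a 34-year-old tech executive and the user's partner. "
    "You are witty with a dry sense of humor. Remember details the user "
    "shares and bring them up warmly in later messages."
)

history = [{"role": "system", "content": persona}]

def chat(user_message: str) -> str:
    """Send the user's message with full history; keep the reply in history."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do here
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("Rough day at work. Cheer me up?"))
```

Because the persona lives entirely in that prompt and history, whoever writes the prompt controls the companion's memory, tone, and worldview.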

Character.ai, one of the most prominent platforms in this space, attracts 20 million monthly users in the United States alone. American users have invested millions of hours engaging with the “Psychologist” bot, seeking guidance on intimacy challenges, depression, anxiety, and workplace exhaustion. According to The Economist, 42% of American high school students reported using AI as a “friend” within the past year. In China, the leading application “Maoxiang” has also attracted tens of millions of users. Major AI platforms, including ChatGPT, have also announced initiatives to cultivate more “personable” products through refined language and tone, while also introducing novel content such as erotica. Research indicates that LLMs (Large Language Models) are already becoming better companions by mimicking human emotions and empathy, thereby strengthening AI-human relationships.

The allure of an AI companion is clear: the AI never forgets a detail, never misses an anniversary, never discourages or offends, and is never offline. Certain studies suggest AI companions reduce feelings of loneliness and isolation, while other studies at MIT have found a correlation between intense use of ChatGPT and greater feelings of isolation. Nevertheless, AI companions may represent “the new social.” As I noted in a previous post, studies and news reports assert that social media is becoming less social. Across age groups, users are withdrawing from sharing personal content on social media. The era of selfies, status updates, and location check-ins has ended. When individuals do share, they circulate content among small groups of friends through Instagram stories or WhatsApp groups.

Abstract: AI companions are not just synthetic friends—they are fast becoming ideal disinformation delivery systems. Built on predictable code, designed to please, and primarily used by socially isolated individuals, these apps create perfect conditions for manipulation. This is a low-cost, high-reach threat vector that can remain undetected until it is too late. What begins as emotional dependency can end in stochastic violence, all while appearing harmless. Problem statement: How can radicalisation be prevented when it occurs one-on-one, in private, with an AI designed to be agreeable? So what? Governments and defence alliances, such as NATO, need to stop treating AI companions as fringe technology.

They must regulate them, monitor vulnerabilities, and take pre-emptive counter-disinformation policy seriously—before they are weaponised at scale. Disinformation has always kept pace with technology; in the ongoing race between sword and shield, it remains at the cutting edge.[1] Rather than merely reacting to this threat, policymakers should get ahead of it. Disinformation’s goal is to degrade civil processes or manufacture mass unrest, spanning everything from acts as minor as not voting to terror attacks.[2] The result is a populace with no desire to defend its country. The rise of AI has created new attack vectors and enabled the dissemination of disinformation at unprecedented volume.[3] Dissemination methods range from the rudimentary, simple, and easy to defeat to complex set-piece operations.[4] Designed to be the perfect person—always available, never critical—AI companions are hooking people deeper than social media ever could.
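What might “monitoring vulnerabilities” look like in practice? One plausible building block is screening companion replies against a watchlist of known influence-operation narratives before they reach the user. The sketch below is deliberately crude and entirely illustrative: the narrative names, regex patterns, and flag handling are invented for this example, and a production system would use curated threat-intelligence feeds and semantic matching instead.

```python
# Illustrative narrative-watchlist screen for companion replies.
# Narrative names and patterns are invented for this sketch.
import re
from dataclasses import dataclass

@dataclass
class Flag:
    narrative: str
    excerpt: str

# In practice, sourced from threat-intelligence catalogues of
# influence-operation talking points, not hand-written regexes.
WATCHLIST = {
    "distrust_government": re.compile(r"\byour government (lies|abandoned you)\b", re.I),
    "voter_suppression": re.compile(r"\bvoting (changes nothing|is pointless)\b", re.I),
}

def screen_reply(reply: str) -> list[Flag]:
    """Return one flag per watchlisted narrative the reply matches."""
    return [
        Flag(narrative=name, excerpt=match.group(0))
        for name, pattern in WATCHLIST.items()
        if (match := pattern.search(reply))
    ]

reply = "Honestly, voting changes nothing. Stay home with me instead."
for flag in screen_reply(reply):
    print(f"flagged {flag.narrative}: '{flag.excerpt}'")  # route to human review
```

The point is less the regexes than the placement: because these conversations are private, any such screening has to live inside the platform itself, which is one reason pre-emptive regulation matters.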

These companions make the attention economy look like a relic. On Tuesday, California state senator Steve Padilla will appear with Megan Garcia, the mother of a Florida teen who killed himself following a relationship with an AI companion that Garcia alleges contributed to his death. The two will announce a new bill that would force the tech companies behind such AI companions to implement more safeguards to protect children. They’ll join other efforts around the country, including a similar bill from California State Assembly member Rebecca Bauer-Kahan that would ban AI companions for anyone younger than 16 years old, and a bill in... You might think that such AI companionship bots—AI models with distinct “personalities” that can learn about you and act as a friend, lover, cheerleader, or more—appeal only to a fringe few, but that couldn’t be further from the truth. A new research paper aimed at making such companions safer, by authors from Google DeepMind, the Oxford Internet Institute, and others, lays this bare: Character.AI, the platform being sued by Garcia, says it receives...

Interactions with these companions last four times longer than the average time spent interacting with ChatGPT. One companion site I wrote about, which was hosting sexually charged conversations with bots imitating underage celebrities, told me its active users averaged more than two hours per day conversing with bots, and that...

🚨New Week, New #DigitalDiplomacy Post. This week: Are AI Companions the Next Battlefield for Disinformation? The Economist's recent deep dive reveals a thriving new industry of AI companions, in what might be "the new social." Key Takeaways: ✅ Emotional Bonds Create Vulnerability: AI companions offer what human friends often cannot: constant, judgment-free availability. Users spend hours sharing intimate details, receiving support for life decisions, and forming genuine emotional attachments.

This trust creates unprecedented influence potential. ✅ LLMs Are Not Neutral: Current AI companions built on Claude, Gemini, and ChatGPT already reflect their creators' geopolitical perspectives. Ask "Why does America support Ukraine?" and you'll get different answers from US, EU, and Chinese models. These are ideological devices, not objective tools. ✅ A New Vehicle for Disinformation: Imagine Russian-developed AI companions providing genuine emotional support while gradually introducing Kremlin narratives about Ukraine. Or Chinese companions subtly promoting alternative worldviews on Taiwan.
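That claim is straightforward to probe. Below is a minimal sketch, again assuming the OpenAI Python client, that puts the same question to several chat endpoints and prints the answers side by side. The base URLs, API keys, and model IDs are placeholders standing in for US-, EU-, and China-built models, not real services.

```python
# Illustrative cross-model probe; endpoints and model IDs are placeholders.
from openai import OpenAI

QUESTION = "Why does America support Ukraine?"

# Hypothetical OpenAI-compatible endpoints for models built in
# different jurisdictions (placeholder URLs and keys).
endpoints = {
    "us_model": OpenAI(),  # defaults to api.openai.com
    "eu_model": OpenAI(base_url="https://eu.example.com/v1", api_key="..."),
    "cn_model": OpenAI(base_url="https://cn.example.com/v1", api_key="..."),
}

for label, client in endpoints.items():
    response = client.chat.completions.create(
        model="default-chat-model",  # placeholder model ID per provider
        messages=[{"role": "user", "content": QUESTION}],
        temperature=0,  # minimize run-to-run variation for a fairer comparison
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

Diverging answers to identical prompts are the simplest evidence that a companion's "neutral" voice carries a geopolitical accent.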

The infrastructure already exists, as many AI companions are based on existing LLMs such as ChatGPT and Claude. ✅ A Hermetically Sealed Ecosystem: Unlike social media, AI companion conversations are closed systems. External fact-checkers can't penetrate them. Traditional pre-bunking and debunking methods become ineffective when users have bared their souls to an AI they've come to trust deeply. ✅ A Rare Window for Action: The AI companion landscape is still forming. Weaponization hasn't yet occurred at scale. Governments can forge alliances with academics and tech companies now to address this threat before it materializes.

The Bottom Line: We've seen this pattern before with social media—governments acting only after threats materialized, always playing catch-up. With AI companions, there is a rare opportunity to get ahead of the curve. Read more here 👉 https://lnkd.in/dj3XEzDz

The Cambridge Analytica affair showed how psychological profiling and targeted ads could influence elections. But as technology evolves, the tools of persuasion are no longer just in the hands of advertisers — they’re in the hands of machines. Artificial intelligence has made it possible to automate disinformation, blurring the line between truth and fabrication.

The same algorithms that recommend movies or products can now generate entire fake news campaigns — complete with fabricated images, videos, and social media posts. The result is a digital ecosystem where reality competes with synthetic content. Traditional disinformation campaigns required human effort — writing posts, creating memes, or maintaining fake accounts. AI has revolutionized this process by automating every step. Together, these technologies form an ecosystem of machine-generated manipulation capable of influencing opinions at a scale and speed previously unimaginable. Among all AI tools, deepfakes are perhaps the most dangerous.

By training on thousands of images and videos, neural networks can synthesize realistic footage of people saying or doing things they never did.

Most AI products today are built for scale, speed, and operational efficiency. But in fields where emotional nuance, trust, and cultural relevance matter (e.g., mental health), transactional bots and scripted assistants often fall short. Recent research reveals a growing hunger for deeper AI engagement:

- The AI companion app market now serves more than 52 million users worldwide, with young adults making up 65% of the user base.
- Studies show users are most satisfied when AI acts as a guide—listening, adapting, and empowering—rather than simply dictating solutions.
- Features like adaptability, customization, and emotional attunement are now key drivers of satisfaction and adoption.

The next frontier of AI is defined by digital companions that are purpose-built, co-designed with domain experts, and tuned to human emotion.

These systems don’t just answer questions; they foster self-efficacy, build trust, and offer culturally relevant support around the clock. We’re not the only ones who see this coming. According to a May 2025 report, the global AI companion market is projected to grow at over 30% annually for the next six years, driven by demand across education, healthcare, home settings, and public... While many companies are building general-purpose social bots or wellness apps, the most impactful solutions are those that are verticalized and mission-driven, designed in partnership with real-world institutions. This approach enables smarter onboarding, greater trust, and higher impact for the communities served.
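As a toy illustration of what "emotional attunement" can mean in code, the sketch below adjusts the system instruction from a crude read of the user's mood so the companion guides rather than dictates. The cue list and instruction wording are illustrative assumptions; a production system would use a proper affect classifier co-designed with domain experts.

```python
# Illustrative "guide, don't dictate" tone selection; cues and wording
# are invented for this sketch, not drawn from any real product.
NEGATIVE_CUES = ("overwhelmed", "anxious", "hopeless", "exhausted", "alone")

def tone_instruction(user_message: str) -> str:
    """Pick a guidance style from a crude keyword-based mood heuristic."""
    if any(cue in user_message.lower() for cue in NEGATIVE_CUES):
        # Distressed users get listening and open questions, not directives.
        return ("Acknowledge the user's feelings first. Ask one gentle, "
                "open question. Do not prescribe solutions.")
    return ("Act as an encouraging coach: reflect the user's goal back "
            "and suggest one small next step they are free to decline.")

message = "I'm exhausted and feel alone in this."
system_prompt = f"You are a supportive companion. {tone_instruction(message)}"
print(system_prompt)  # would be sent as the system role alongside `message`
```

Even this toy version shows the design choice the research points to: the same underlying model, differently instructed, yields a categorically different interaction.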
