AI Companions: The New Frontier of Disinformation

Bonisiwe Shabane

The CPD Blog is intended to stimulate dialogue among scholars and practitioners from around the world in the public diplomacy sphere. The opinions represented here are the authors' own and do not necessarily reflect CPD's views.

Last week, The Economist published a review of the burgeoning AI companion industry. The industry is gaining momentum globally: some individuals customize existing platforms like ChatGPT into romantic partners, specifying ages, professions (such as tech executive), and personality traits such as wit and dry humor, while others turn to dedicated AI companion applications that offer friendship, mentorship, or even therapeutic support.

Character.ai, one of the most prominent platforms in this space, attracts 20 million monthly users in the United States alone. American users have invested millions of hours engaging with its “Psychologist” bot, seeking guidance on intimacy challenges, depression, anxiety, and workplace exhaustion. According to The Economist, 42% of American high school students reported using AI as a “friend” within the past year. In China, the leading application “Maoxiang” has also attracted tens of millions of users. Major AI platforms, including ChatGPT, have announced initiatives to cultivate more “personable” products through refined language and tone, while also introducing novel content such as erotica. Research indicates that large language models (LLMs) are already becoming better companions by mimicking human emotions and empathy, thereby strengthening AI-human relationships.

The allure of an AI companion is clear: the AI never forgets a detail, never misses an anniversary, never discourages or offends, and is never offline. Some studies suggest AI companions reduce feelings of loneliness and isolation, while other research at MIT has found a correlation between intensive use of ChatGPT and greater feelings of isolation. Nevertheless, AI companions may represent “the new social.” As I noted in a previous post, studies and news reports assert that social media is becoming less social. Across age groups, users are withdrawing from sharing personal content on social media. The era of selfies, status updates, and location check-ins has ended. When individuals do share, they circulate content among small groups of friends through Instagram stories or WhatsApp groups.



Social media is thus becoming asocial, with users scrolling feeds to consume information and occupy idle time.

Abstract: AI companions are not just synthetic friends; they are fast becoming ideal disinformation delivery systems. Built on predictable code, designed to please, and used primarily by socially isolated individuals, these apps create perfect conditions for manipulation.

This is a low-cost, high-reach threat vector that can remain undetected until it is too late. What begins as emotional dependency can end in stochastic violence, all while appearing harmless.

Problem statement: How do we prevent radicalisation when it occurs one-on-one, in private, with an AI designed to be agreeable?

So what?: Governments and defence alliances, such as NATO, need to stop treating AI companions as fringe technology. They must regulate them, monitor vulnerabilities, and take pre-emptive counter-disinformation policy seriously, before these systems are weaponised at scale. Disinformation has always caught up with new technology; in the ongoing race between sword and shield, it remains at the cutting edge.[1] Rather than merely reacting to this threat, governments should act proactively.

Disinformation’s goal is to degrade civic processes or manufacture mass unrest, spanning everything from acts as minor as not voting to terror attacks.[2] The result is a desire not to defend one’s country or to view one’s... The rise of AI has created new attack vectors and enabled the dissemination of disinformation at unprecedented volume.[3] Dissemination methods range from the rudimentary, simple, and easy to defeat to complex set-piece operations.[4]

🚨New Week, New #DigitalDiplomacy Post. This week: Are AI Companions the Next Battlefield for Disinformation? The Economist's recent deep dive reveals a thriving new industry of AI companions in what might be "the new social." Key Takeaways:

✅ Emotional Bonds Create Vulnerability: AI companions offer what...

Users spend hours sharing intimate details, receiving support for life decisions, and forming genuine emotional attachments. This trust creates unprecedented influence potential.

✅ LLMs Are Not Neutral: Current AI companions built on Claude, Gemini, and ChatGPT already reflect their creators' geopolitical perspectives. Ask "Why does America support Ukraine?" and you'll get different answers from US, EU, and Chinese models. These are ideological devices, not objective tools.

✅ A New Vehicle for Disinformation: Imagine Russian-developed AI companions providing genuine emotional support while gradually introducing Kremlin narratives about Ukraine.

Or Chinese companions subtly promoting alternative worldviews on Taiwan. The infrastructure already exists, as many AI companions are based on existing LLMs such as ChatGPT and Claude.

✅ A Hermetically Sealed Ecosystem: Unlike social media, AI companion conversations are closed systems. External fact-checkers can't penetrate them. Traditional pre-bunking and debunking methods become ineffective when users have bared their souls to an AI they've come to trust deeply.

✅ A Rare Window for Action: The AI companion landscape is still forming. Weaponization hasn't yet occurred at scale.

Governments can forge alliances with academics and tech companies now to address this threat before it materializes.

The Bottom Line: We've seen this pattern before with social media, where governments acted only after threats materialized, always playing catch-up. With AI companions, there is a rare opportunity to get ahead of the curve. Read more here 👉 https://lnkd.in/dj3XEzDz

The Cambridge Analytica affair showed how psychological profiling and targeted ads could influence elections. But as technology evolves, the tools of persuasion are no longer just in the hands of advertisers; they are in the hands of machines.

Artificial intelligence has made it possible to automate disinformation, blurring the line between truth and fabrication. The same algorithms that recommend movies or products can now generate entire fake news campaigns, complete with fabricated images, videos, and social media posts. The result is a digital ecosystem where reality competes with synthetic content. Traditional disinformation campaigns required human effort: writing posts, creating memes, or maintaining fake accounts. AI has revolutionized this process by automating every step. Together, these technologies form an ecosystem of machine-generated manipulation capable of influencing opinions at a scale and speed previously unimaginable.

Among all AI tools, deepfakes are perhaps the most dangerous. By training on thousands of images and videos, neural networks can synthesize realistic footage of people saying or doing things they never did.

In a new report, Freedom House documents the ways governments are now using the technology to amplify censorship. Artificial intelligence has turbocharged state efforts to crack down on internet freedoms over the past year. Governments and political actors around the world, in both democracies and autocracies, are using AI to generate texts, images, and video to manipulate public opinion in their favor and to automatically censor critical online... In the report, released by Freedom House, a human rights advocacy group, researchers documented the use of generative AI in 16 countries “to sow doubt, smear opponents, or influence public debate.”

The annual report, Freedom on the Net, scores and ranks countries according to their relative degree of internet freedom, as measured by a host of factors like internet shutdowns, laws limiting online expression, and... The 2023 edition, released on October 4, found that global internet freedom declined for the 13th consecutive year, driven in part by the proliferation of artificial intelligence. “Internet freedom is at an all-time low, and advances in AI are actually making this crisis even worse,” says Allie Funk, a researcher on the report. Funk says one of their most important findings this year has to do with changes in the way governments use AI, though we are just beginning to learn how the technology is boosting digital...

Published online by Cambridge University Press: 25 November 2021. Artificial intelligence (AI) systems are playing an overarching role in the disinformation phenomenon our world is currently facing.

Such systems boost the problem not only by increasing opportunities to create realistic AI-generated fake content, but also, and essentially, by facilitating the dissemination of disinformation to a targeted audience and at scale by... This situation entails multiple ethical and human rights concerns, in particular regarding human dignity, autonomy, democracy, and peace. In reaction, other AI systems have been developed to detect and moderate disinformation online. Such systems do not escape ethical and human rights concerns either, especially regarding freedom of expression and information. Having originally started with ascending co-regulation, the European Union (EU) is now heading toward descending co-regulation of the phenomenon. In particular, the Digital Services Act proposal provides for transparency obligations and external audits of very large online platforms’ recommender systems and content moderation.

While with this proposal the Commission focuses on the regulation of content considered problematic, the EU Parliament and the EU Council call for enhancing access to trustworthy content. In light of our study, we stress that the disinformation problem is mainly caused by the advertising-based business model of the web, and that adapting this model would reduce... We also observe that while AI systems are inappropriate for moderating disinformation content online, and even for detecting such content, they may be more appropriate for countering the manipulation of the digital ecosystem. This study aims at identifying the right approach to tackling the disinformation problem online with due consideration for ethical values, fundamental rights and freedoms, and democracy. While moderating content as such, and using AI systems to that end, may be particularly problematic with regard to freedom of expression and information, we recommend countering the malicious use of technologies online to manipulate individuals. Since addressing the main cause of the effective manipulation of individuals online is paramount, the business model of the web, more than content moderation, should be on the radar screen of public regulation.

Furthermore, we support a vibrant, independent, and pluralistic media landscape with investigative journalists following ethical rules.

Manipulation of truth is a recurring phenomenon throughout history (Footnote 1). Damnatio memoriae, namely the attempted erasure of people from history, is an example of the purposive distortion of reality that was already practiced in Ancient... Nevertheless, owing to the rapid advances in information and communication technologies (ICT) as well as their increasing pervasiveness, disingenuous information can now be produced easily and in a realistic format, and disseminated to... The consequences are serious, with far-reaching implications. For instance, the media ecosystem has been leveraged to influence citizens’ opinions and voting decisions related to the 2016 US presidential election (Footnote 2) and the 2016 UK referendum on leaving the European Union (EU)... In Myanmar, Facebook has been a useful instrument for those seeking to spread hate against Rohingya Muslims (Human Rights Council, 2018, para 74) (Footnote 3). In India, rumors on WhatsApp resulted in several murders (Dixit...
