Leading Tech Firms Pledge to Address Election Risks Posed by AI

Bonisiwe Shabane

With more than half of the world’s population poised to vote in elections around the world this year, tech leaders, lawmakers and civil society groups are increasingly concerned that artificial intelligence could cause confusion. Now, a group of leading tech companies say they are teaming up to address that threat.

More than a dozen tech firms involved in building or using AI technologies pledged on Friday to work together to detect and counter harmful AI content in elections, including deepfakes of political candidates. Signatories include OpenAI, Google, Meta, Microsoft, TikTok, Adobe and others. The agreement, called the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections,” includes commitments to collaborate on technology to detect misleading AI-generated content and to be transparent with the public about those efforts.

“AI didn’t create election deception, but we must ensure it doesn’t help deception flourish,” Microsoft President Brad Smith said in a statement at the Munich Security Conference Friday.

One year ago this week, 27 artificial intelligence companies and social media platforms signed an accord that highlighted how AI-generated disinformation could undermine elections around the world. The signers at a security conference in Munich included Google, Meta, Microsoft, OpenAI, and TikTok. They acknowledged the dangers, stating, “The intentional and undisclosed generation and distribution of Deceptive AI Election content can deceive the public in ways that jeopardize the integrity of electoral processes.” The signatories agreed to eight commitments to mitigate the risks that generative AI poses to elections.

This analysis assesses how the companies followed through on their commitments, based on their own reporting. At the time the accord was signed, the companies involved received positive attention for promising to act to ensure that their products would not interfere with elections. While the Brennan Center, too, praised these companies for the accord, we also asked how the public should gauge whether the commitments were anything more than PR window dressing. Companies had multiple opportunities to report on their progress over the past year, including through updates on the accord’s official website, responses to a formal inquiry from then-Senate Intelligence Committee Chair Mark Warner (D-VA), and other channels.

Tech giants including Microsoft, Meta, Google, Amazon, X, OpenAI and TikTok unveiled an agreement on Friday aimed at mitigating the risk that artificial intelligence will disrupt elections in 2024.

The tech industry “accord” takes aim at AI-generated images, video and audio that could deceive voters about candidates, election officials and the voting process. But it stops short of calling for an outright ban on such content. And while the agreement is a show of unity for platforms with billions of collective users, it largely outlines initiatives that are already underway, such as efforts to detect and label AI-generated content.

Fears over how AI could be used to mislead voters and maliciously misrepresent those running for office are escalating in a year that will see millions of people around the world head to the polls. Apparent AI-generated audio has already been used to impersonate President Biden discouraging Democrats from voting in New Hampshire’s January primary and to purportedly show a leading candidate claiming to rig the vote in Slovakia’s election.

“The intentional and undisclosed generation and distribution of Deceptive AI Election content can deceive the public in ways that jeopardize the integrity of electoral processes,” the text of the accord says.

“We affirm that the protection of electoral integrity and public trust is a shared responsibility and a common good that transcends partisan interests and national borders.”

A coalition of major technology companies committed on Friday to limit the malicious use of deepfakes and other forms of artificial intelligence to manipulate or deceive voters in democratic elections. The AI elections accord, announced at the Munich Security Conference, outlines a series of commitments to make it harder for bad actors to use generative AI, large language models and other AI tools to disrupt elections.

Signed by 20 major companies, the document features a who’s-who of technology firms, including OpenAI, Microsoft, Amazon, Meta, TikTok and the social media platform X. It also includes key but lesser-known players in the AI industry, like Stability AI and ElevenLabs — whose technology has already been implicated in the creation of AI-generated content used to influence voters in New Hampshire. Other signatories include Adobe and Truepic, two firms that are working on detection and watermarking technologies.

Friday’s agreement commits these companies to supporting the development of tools that can better detect, verify or label media that is synthetically generated or manipulated. They also committed to dedicated assessments of AI models to better understand how they may be leveraged to disrupt elections and to develop enhanced methods to track the distribution of viral AI-generated content online. The signatories committed to labeling AI media where possible while respecting legitimate uses like satire. The agreement marks the most comprehensive effort to date by global tech companies to address the ways in which AI might be used to manipulate elections, and comes on the heels of several such incidents.

Leading technology corporations such as Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok convened at the Munich Security Conference on Friday to announce a voluntary commitment aimed at safeguarding democratic elections from the deceptive use of AI. The initiative, which was joined by 12 additional companies such as Elon Musk’s X, introduces a framework designed to address the challenge posed by AI-generated deepfakes that could deceive voters.

The framework outlines a comprehensive strategy to address the proliferation of deceptive AI election content. This type of content includes AI-generated audio, video and images designed to misleadingly replicate or alter political figures’ appearances, voices or actions, as well as disseminate false information about voting processes. The framework’s scope focuses on managing risks associated with such content on publicly accessible platforms and foundational models. It excludes applications meant for research or enterprise due to their different risk profiles and mitigation strategies. The framework further acknowledges that the deceptive use of AI in elections is only one aspect of a broader spectrum of threats to electoral integrity, which also includes traditional misinformation tactics and cybersecurity vulnerabilities.

It calls for continuous, multifaceted efforts to address these threats comprehensively, beyond just the scope of AI-generated misinformation. Highlighting AI’s potential as a defensive tool, the framework points out its utility in enabling the rapid detection of deceptive campaigns, enhancing consistency across languages, and cost-effectively scaling defense mechanisms.

The framework also advocates for a whole-of-society approach, urging collaboration among technology companies, governments, civil society and the electorate to maintain electoral integrity and public trust. It frames the protection of the democratic process as a shared responsibility that transcends partisan interests and national boundaries. By outlining seven principal goals, the framework emphasizes the importance of proactive and comprehensive measures to prevent, detect and respond to deceptive AI election content, enhance public awareness, and foster resilience through education. To achieve these objectives, the framework details specific commitments for signatories through 2024.

These commitments include developing technologies to identify and mitigate the risks posed by deceptive AI election content, such as content authentication and provenance technology. Signatories are also expected to assess AI models for potential misuse, detect and manage deceptive content on their platforms, and build cross-industry resilience by sharing best practices and technical tools. Transparency in addressing deceptive content and engagement with a diverse range of stakeholders are highlighted as critical components of the framework. The aim is to inform technology development and foster public awareness about the challenges posed by AI in elections.

Twenty prominent technology companies, including Google, Microsoft, IBM, Meta, and OpenAI, signed an accord today agreeing to take concrete steps to prevent the spread of deceptive AI-generated content aimed at interfering with elections taking place this year. The “Tech Accord to Combat Deceptive Use of AI in 2024 Elections” was announced at the annual Munich Security Conference.

Signatories pledged to work together on developing tools to detect and address online distribution of fabricated audio, video and images related to elections. The accord specifically focuses on AI-generated content that seeks to deceptively alter the appearance or words of political candidates and provides false voting information to deceive citizens. This type of manipulated media, often called "deepfakes," presents a threat to election integrity around the world, according to the companies. "Elections are the beating heart of democracies. The Tech Accord to Combat Deceptive Use of AI in 2024 elections is a crucial step in advancing election integrity, increasing societal resilience, and creating trustworthy tech practices," said Ambassador Dr. Christoph Heusgen, Chairman of the Munich Security Conference, in a statement.

As part of the accord, companies agreed to eight commitments including assessing AI systems that could enable election deception campaigns, seeking to detect deepfakes on their platforms, providing transparency around policies, and supporting public awareness efforts. The agreement is voluntary and doesn’t go as far as a complete ban on AI content in elections.

Twenty tech companies developing artificial intelligence (AI) announced on Friday, Feb. 16, their commitment to prevent their software from influencing elections, including in the United States. The agreement acknowledges that AI products pose a significant risk, especially in a year when around four billion people worldwide are expected to participate in elections. The document highlights concerns about deceptive AI in election content and its potential to mislead the public, posing a threat to the integrity of electoral processes.

The agreement also acknowledges that global lawmakers have been slow to respond to the rapid progress of generative AI, leading the tech industry to explore self-regulation. Brad Smith, vice chair and president of Microsoft, voiced his support in a statement. The 20 signatories of the pledge are Microsoft, Google, Adobe, Amazon, Anthropic, Arm, ElevenLabs, IBM, Inflection AI, LinkedIn, McAfee, Meta, Nota, OpenAI, Snap, Stability AI, TikTok, Trend Micro, Truepic and X.

Tech companies generally have a less-than-stellar record of self-regulation and enforcing their own policies. But the agreement comes as regulators continue to lag on creating guardrails for rapidly advancing AI technologies.
