Tech Companies Pledged to Protect Elections from AI. Here's How They Followed Through

Bonisiwe Shabane

One year ago this week, 27 artificial intelligence companies and social media platforms signed an accord that highlighted how AI-generated disinformation could undermine elections around the world. The signers at a security conference in Munich included Google, Meta, Microsoft, OpenAI, and TikTok. They acknowledged the dangers, stating, “The intentional and undisclosed generation and distribution of Deceptive AI Election content can deceive the public in ways that jeopardize the integrity of electoral processes.” The signatories agreed to eight commitments to mitigate the risks that generative AI poses to elections. This analysis assesses how the companies followed through on those commitments, based on their own reporting.

At the time the accord was signed, the companies involved received positive attention for promising to act to ensure that their products would not interfere with elections. While the Brennan Center, too, praised these companies for the accord, we also asked how the public should gauge whether the commitments were anything more than PR window-dressing. Read the Brennan Center’s Agenda to Strengthen Democracy in the Age of AI >> Companies had multiple opportunities to report on their progress over the past year, including through updates on the accord’s official website and responses to a formal inquiry from then-Senate Intelligence Committee Chair Mark Warner (D-VA),...

[Photo caption: FILE - Meta’s president of global affairs Nick Clegg speaks at the World Economic Forum in Davos, Switzerland, Jan. 18, 2024.]

Major technology companies signed a pact Friday to voluntarily adopt “reasonable precautions” to prevent artificial intelligence tools from being used to disrupt democratic elections around the world. Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok gathered at the Munich Security Conference to announce a new framework for how they respond to AI-generated deepfakes that deliberately trick voters. Twelve other companies — including Elon Musk’s X — are also signing on to the accord. “Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own,” said...

The accord is largely symbolic, but targets increasingly realistic AI-generated images, audio and video “that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in...

A coalition of major technology companies committed on Friday to limit the malicious use of deepfakes and other forms of artificial intelligence to manipulate or deceive voters in democratic elections. The AI elections accord, announced at the Munich Security Conference, outlines a series of commitments to make it harder for bad actors to use generative AI, large language models and other AI tools to... Signed by 20 major companies, the document features a who’s-who of technology firms, including OpenAI, Microsoft, Amazon, Meta, TikTok and the social media platform X. It also includes key but lesser-known players in the AI industry, like StabilityAI and ElevenLabs — whose technology has already been implicated in the creation of AI-generated content used to influence voters in New... Other signatories include Adobe and TruePic, two firms that are working on detection and watermarking technologies.

Friday’s agreement commits these companies to supporting the development of tools that can better detect, verify or label media that is synthetically generated or manipulated. They also committed to dedicated assessments of AI models to better understand how they may be leveraged to disrupt elections and to develop enhanced methods to track the distribution of viral AI-generated content on... The signatories committed to labeling AI media where possible while respecting legitimate uses like satire. The agreement marks the most comprehensive effort to date by global tech companies to address the ways in which AI might be used to manipulate elections, and comes on the heels of several incidents...

Twenty tech companies working on artificial intelligence said Friday they had signed a “pledge” to try to prevent their software from interfering in elections, including in the United States. The signatories range from tech giants such as Microsoft and Google to a small startup that allows people to make fake voices — the kind of generative-AI product that could be abused in an...

The accord is, in effect, a recognition that the companies’ own products create a lot of risk in a year in which 4 billion people around the world are expected to vote in elections. “Deceptive AI Election content can deceive the public in ways that jeopardize the integrity of electoral processes,” the document reads. The accord is also a recognition that lawmakers around the world haven’t responded very quickly to the swift advancements in generative AI, leaving the tech industry to explore self-regulation.

With more than half of the world’s population poised to vote in elections around the world this year, tech leaders, lawmakers and civil society groups are increasingly concerned that artificial intelligence could cause confusion... Now, a group of leading tech companies say they are teaming up to address that threat. More than a dozen tech firms involved in building or using AI technologies pledged on Friday to work together to detect and counter harmful AI content in elections, including deepfakes of political candidates.

Signatories include OpenAI, Google, Meta, Microsoft, TikTok, Adobe and others. The agreement, called the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections,” includes commitments to collaborate on technology to detect misleading AI-generated content and to be transparent with the public about... “AI didn’t create election deception, but we must ensure it doesn’t help deception flourish,” Microsoft President Brad Smith said in a statement at the Munich Security Conference Friday. Tech giants including Microsoft, Meta, Google, Amazon, X, OpenAI and TikTok unveiled an agreement on Friday aimed at mitigating the risk that artificial intelligence will disrupt elections in 2024. The tech industry "accord" takes aim at AI-generated images, video and audio that could deceive voters about candidates, election officials and the voting process.

But it stops short of calling for an outright ban on such content. And while the agreement is a show of unity for platforms with billions of collective users, it largely outlines initiatives that are already underway, such as efforts to detect and label AI-generated content. Fears over how AI could be used to mislead voters and maliciously misrepresent those running for office are escalating in a year that will see millions of people around the world head to the... Apparent AI-generated audio has already been used to impersonate President Biden discouraging Democrats from voting in New Hampshire's January primary and to purportedly show a leading candidate claiming to rig the vote in Slovakia's... "The intentional and undisclosed generation and distribution of Deceptive AI Election content can deceive the public in ways that jeopardize the integrity of electoral processes," the text of the accord says. "We affirm that the protection of electoral integrity and public trust is a shared responsibility and a common good that transcends partisan interests and national borders."

[Photo caption: U.S. Sen. Mark Warner (D-VA) leaves the U.S. Capitol on July 11, 2024 in Washington, DC. Warner received commitments of AI monitoring practices from several major tech companies ahead of the 2024 election. Tierney L. Cross/Getty Images]

Leading tech companies have pledged to implement various practices to protect against the influence of artificial intelligence-generated content ahead of election season. A total of nineteen leading tech firms sent response letters to a call for replies that Sen. Mark Warner, D-Va., issued back in May; companies including X, Google, Anthropic, Meta, Microsoft and McAfee provided details about their internal commitments to monitoring their online platforms for AI-augmented content related to the... That’s out of the 24 total companies Warner sent letters to as signatories of the AI Elections Accord established in February at the Munich Security Conference. “I appreciate the thoughtful engagement from the signatories of the Munich Tech Accord,” Warner said in a press release.

“Their responses indicated promising avenues for collaboration, information-sharing, and standards development, but also illuminated areas for significant improvement.” The content of each company’s letter varied. Leadership from social media site X, formerly Twitter, said that its internal Safety Teams are continuing to monitor the validity of content published on its platform.

The agreement is voluntary and doesn’t go as far as a complete ban on AI content in elections. Twenty tech companies developing artificial intelligence (AI) announced on Friday, Feb. 16, their commitment to prevent their software from influencing elections, including in the United States.

The agreement acknowledges that AI products pose a significant risk, especially in a year when around four billion people worldwide are expected to participate in elections. The document highlights concerns about deceptive AI in election content and its potential to mislead the public, posing a threat to the integrity of electoral processes. The agreement also acknowledges that global lawmakers have been slow to respond to the rapid progress of generative AI, leading the tech industry to explore self-regulation. Brad Smith, vice chair and president of Microsoft, voiced his support in a statement. The 20 signatories of the pledge are Microsoft, Google, Adobe, Amazon, Anthropic, Arm, ElevenLabs, IBM, Inflection AI, LinkedIn, McAfee, Meta, Nota, OpenAI, Snap, Stability AI, TikTok, Trend Micro, Truepic and X.
