Tech Giants Pledge to Curb AI Election Interference (Cointelegraph)

Bonisiwe Shabane

The agreement is voluntary and doesn’t go as far as a complete ban on AI content in elections. Twenty tech companies developing artificial intelligence (AI) announced on Friday, Feb. 16, their commitment to prevent their software from influencing elections, including in the United States. The agreement acknowledges that AI products pose a significant risk, especially in a year when around four billion people worldwide are expected to participate in elections. The document highlights concerns about deceptive AI election content and its potential to mislead the public, posing a threat to the integrity of electoral processes. The agreement also acknowledges that global lawmakers have been slow to respond to the rapid progress of generative AI, leading the tech industry to explore self-regulation.

Brad Smith, vice chair and president of Microsoft, voiced his support for the accord in a statement. The 20 signatories of the pledge are Microsoft, Google, Adobe, Amazon, Anthropic, Arm, ElevenLabs, IBM, Inflection AI, LinkedIn, McAfee, Meta, Nota, OpenAI, Snap, Stability AI, TikTok, Trend Micro, Truepic and X. One year ago this week, 27 artificial intelligence companies and social media platforms signed an accord that highlighted how AI-generated disinformation could undermine elections around the world. The signers at a security conference in Munich included Google, Meta, Microsoft, OpenAI, and TikTok. They acknowledged the dangers, stating, “The intentional and undisclosed generation and distribution of Deceptive AI Election content can deceive the public in ways that jeopardize the integrity of electoral processes.” The signatories agreed to eight commitments to mitigate the risks that generative AI poses to elections.

This analysis assesses how the companies followed through on their commitments, based on their own reporting. At the time the accord was signed, the companies involved received positive attention for promising to act to ensure that their products would not interfere with elections. While the Brennan Center, too, praised these companies for the accord, we also asked how the public should gauge whether the commitments were anything more than PR window-dressing. Companies had multiple opportunities to report on their progress over the past year, including through updates on the accord’s official website and responses to a formal inquiry from then-Senate Intelligence Committee Chair Mark Warner (D-VA),...


However, the accord is voluntary and doesn’t go as far as a complete ban on AI content in elections. The 1,500-word document outlines eight steps the companies commit to taking this year. These steps involve creating tools to differentiate AI-generated images from genuine content and ensuring transparency with the public about significant developments.

On Friday, 16 February, the firms signed the ‘Tech Accord to Combat Deceptive Use of AI in 2024 Elections’ at the Munich Security Conference. The accord is a set of commitments to deploy technology countering ‘harmful AI-generated content meant to deceive voters’. In 2024, four billion people in over 40 countries around the world are expected to vote. A paper published in PNAS Nexus in January projected that disinformation campaigns will use generative AI, and that bad-actor AI attacks will occur almost daily by mid-2024.

Another study, published in Science in 2023, found that AI-generated disinformation was more ‘compelling’ than disinformation created by humans, and that people cannot reliably distinguish between human- and AI-generated disinformation. A group of 20 tech companies announced on Friday that they have agreed to work together to prevent deceptive artificial intelligence content from interfering with elections across the globe this year. The rapid growth of generative artificial intelligence (AI), which can create text, images and video in seconds in response to prompts, has heightened fears that the new technology could be used to sway major... Signatories of the tech accord, which was announced at the Munich Security Conference, include companies that are building generative AI models used to create content, including OpenAI, Microsoft and Adobe. Other signatories include social media platforms that will face the challenge of keeping harmful content off their sites, such as Meta Platforms, TikTok and X, formerly known as Twitter. The agreement includes commitments to collaborate on developing tools for detecting misleading AI-generated images, video and audio, creating public awareness campaigns to educate voters on deceptive content, and taking action on such content on...

Technology to identify AI-generated content or certify its origin could include watermarking or embedding metadata, the companies said.
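To make the metadata idea concrete: here is a minimal, illustrative sketch of how a provenance record might work in principle. This is not the accord’s actual mechanism and not a real standard such as C2PA content credentials; the field names and functions are invented for illustration, and only the Python standard library is used. The core idea is that a content hash plus origin information travels with the media, so later tampering can be detected.

```python
import hashlib
from datetime import datetime, timezone

def make_provenance_record(media_bytes: bytes, generator: str) -> dict:
    """Build a simple provenance record: a content hash plus origin info.

    Illustrative stand-in for real provenance schemes (e.g. C2PA);
    the field names here are invented for this sketch.
    """
    return {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generator": generator,  # e.g. the AI model that produced the media
        "created": datetime.now(timezone.utc).isoformat(),
    }

def verify_provenance(media_bytes: bytes, record: dict) -> bool:
    """Re-hash the media and compare against the recorded digest.

    A mismatch means the media was altered after the record was made
    (or the record belongs to different media).
    """
    return hashlib.sha256(media_bytes).hexdigest() == record["sha256"]

# Example: tag some synthetic image bytes, then check them later.
image = b"\x89PNG...synthetic image bytes..."
record = make_provenance_record(image, generator="example-image-model")
print(verify_provenance(image, record))             # True: media untouched
print(verify_provenance(image + b"tamper", record)) # False: media altered
```

A real scheme would cryptographically sign the record and embed it in the file itself, so that the claim of origin cannot simply be detached from the media; the hash-only sketch above detects alteration but not a forged record.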

“This is not a drill: We are in one of the most consequential election years in recent memory. Social-media companies need to step up to guard against the harms of #AI.”

