Big Tech's New Rules of the Road for AI and Elections

Bonisiwe Shabane

In the United States, regulatory oversight of artificial intelligence remains minimal. The landscape is complex, shaped by fragmented governance and by the strong influence of major technology companies, which often work to steer and shape emerging rules. While the European Commission advances a robust, risk-based regulatory framework for AI, the U.S. is moving in the opposite direction, prioritizing industry flexibility and federal restraint over a strict, harmonized regulatory regime. This divergence is sometimes described as the "anti-Brussels" effect.

At the federal level, the U.S. still lacks a comprehensive AI law. The US AI Action Plan, published in the summer of 2025, was crafted in the spirit of permissionless innovation: the idea that individuals and companies should be able to develop and deploy new technologies without seeking prior approval from regulators, and that preemptive, restrictive regulation would stifle growth. This approach encourages open experimentation, emphasizes the rapid deployment of AI systems across the startup, commercial, and government sectors, and promotes private-sector collaboration over top-down control.

In this federal vacuum, some states have moved forward with AI-specific laws, including California, Colorado, Utah, and Illinois. However, the national picture remains fragmented and incomplete. Efforts on Capitol Hill to assert federal primacy have raised concerns. In mid-2025, the U.S. House passed a reconciliation package that included a provision to prohibit states from enforcing any AI-related law or regulation for the next ten years. Although the Senate ultimately stripped the 10-year moratorium from the bill by a 99-1 vote, the fact that such a measure was even considered highlights the ongoing struggle between regulators, industry, and legislators over how to govern AI.

While federal regulation remains light, states are stepping in, though their efforts remain piecemeal. The possibility of preemptive federal legislation, or of no regulation at all, looms large. The consequences include a patchwork of rules, regulatory uncertainty for companies, and significant influence by private actors, especially the companies that own the world's largest frontier LLMs. For the first time, Washington is getting close to deciding how to regulate artificial intelligence. And the fight that's brewing isn't about the technology; it's about who gets to do the regulating. In the absence of a meaningful federal AI standard focused on consumer safety, states have introduced dozens of bills to protect residents against AI-related harms, including California's AI safety bill SB 53 and Texas's Responsible Artificial Intelligence Governance Act.

The tech giants and buzzy startups born out of Silicon Valley argue such laws create an unworkable patchwork that threatens innovation. "It's going to slow us in the race against China," Josh Vlasto, co-founder of the pro-AI PAC Leading the Future, told TechCrunch. The industry, and several of its transplants in the White House, is pushing for a national standard or none at all. In the trenches of that all-or-nothing battle, new efforts have emerged to prohibit states from enacting their own AI legislation. Meanwhile, the underlying risk keeps growing: AI is eminently capable of political persuasion and could automate it at mass scale. We are not prepared.

In January 2024, the phone rang in homes all around New Hampshire. On the other end was Joe Biden’s voice, urging Democrats to “save your vote” by skipping the primary. It sounded authentic, but it wasn’t. The call was a fake, generated by artificial intelligence. Today, the technology behind that hoax looks quaint. Tools like OpenAI’s Sora now make it possible to create convincing synthetic videos with astonishing ease.

AI can be used to fabricate messages from politicians and celebrities, even entire news clips, in minutes. The fear that elections could be overwhelmed by realistic fake media has gone mainstream, and for good reason. But that's only half the story. The deeper threat isn't just that AI can imitate people; it's that it can actively persuade them. And new research published this week shows just how powerful that persuasion can be. In two large peer-reviewed studies, AI chatbots shifted voters' views by a substantial margin, far more than traditional political advertising tends to do.

In the coming years, we will see the rise of AI that can personalize arguments, test what works, and quietly reshape political views at scale. That shift from imitation to active persuasion should worry us deeply. Leaders in the tech industry, including OpenAI, Microsoft, TikTok, X, Meta, Amazon, and Google, have announced a new agreement, the "Tech Accord to Combat Deceptive Use of AI in 2024 Elections," which aims to minimize the risks that deceptive AI content poses to elections. Under the accord, the signatories committed to setting up controls for AI-generated content, including audio, video, and images that can mislead voters, election officials, and candidates.

This includes efforts such as detecting and labeling AI-generated and modified content, though the agreement does not ban the creation or distribution of such content. The accord was first unveiled at the Munich Security Conference, which draws members of the intelligence community, heads of state, diplomats, and military officials. It is a voluntary commitment by tech companies to develop and use technology to detect and mark content created by AI and to assess software for potential abuse.
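To make the "detect and mark" commitment concrete, here is a minimal sketch of provenance-style labeling in Python. It is an illustration under stated assumptions, not how any signatory actually implements the accord: real provenance standards such as C2PA embed certificate-based signatures in the media file itself, whereas this sketch substitutes a shared HMAC key, and the names label_ai_content, verify_label, and the generator field are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical signing key. Real provenance schemes (e.g., C2PA) use
# public-key certificates, not a shared secret like this.
SIGNING_KEY = b"demo-key-not-for-production"

def label_ai_content(media_bytes: bytes, generator: str) -> dict:
    """Build a signed manifest declaring that the media is AI-generated."""
    manifest = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generator": generator,
        "ai_generated": True,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_label(media_bytes: bytes, manifest: dict) -> bool:
    """Return True only if the manifest is untampered and matches the media."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(manifest.get("signature", ""), expected):
        return False  # the label itself was altered
    return hashlib.sha256(media_bytes).hexdigest() == unsigned["sha256"]

if __name__ == "__main__":
    clip = b"...synthetic audio bytes..."
    manifest = label_ai_content(clip, generator="voice-model-x")
    print(verify_label(clip, manifest))            # True: label intact
    print(verify_label(b"edited clip", manifest))  # False: content changed
```

Even this toy version shows the design point the accord turns on: the label travels as metadata alongside the content, so anyone who strips or rewrites the manifest defeats detection, which is one reason critics consider voluntary labeling commitments weak.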

Welcome back to The Dispatch from The Tech Oversight Project, your weekly update on all things tech accountability. This week we're covering how candidates who ran on Big Tech accountability swept key races, how a federal judge ordered OpenAI to hand over deleted evidence in a copyright trial, and New York...

🗳️ BIG TECH ACCOUNTABILITY GOES MAINSTREAM: Last Tuesday, Democrats won their elections on four very different battlegrounds. From Virginia to New Jersey to New York and Pennsylvania, candidates ran on different issues before different electorates. But across all of them, the message cut the same way: voters are done subsidizing Big Tech's power, energy appetites, and predatory business models. These weren't boutique issues.

They were cost-of-living, fairness, and democracy fights refracted through technology — and they resonated across age, income, and geography. Together, they proved that Big Tech accountability is now a mainstream political demand. Virginia's data center boom — once sold as neutral "jobs and infrastructure" — became the state's defining affordability fight, and one of the cycle's biggest sleeper issues. The industry's massive energy demands are now driving up power costs, straining the grid, and fueling local backlash from residents who say they're footing Big Tech's electric bill. Governor-elect Abigail Spanberger tied her campaign to a simple moral equation: Big Tech should pay its own electric bill. She called for data centers to have their own rate class, refusing to let tech giants offload industrial-scale costs onto ordinary ratepayers.

Her win cemented data centers as the new political fault line between economic development and exploitation.

In October 2025, a report from Senator Bernie Sanders grabbed headlines for the provocative claim that AI-driven automation could destroy 100 million jobs across the country in the next decade. While the report sparked a debate about which occupations are most vulnerable to automation, the real issue is who gets to decide how AI is deployed in the workplace and to what purpose. Both businesses and governments have already begun to use AI as a tool for surveillance and union busting. Meanwhile, the tech companies developing these AI systems are investing hundreds of millions of dollars in lobbying to block legislation that would impose any guardrails on AI. But where workers have organized and built collective power, they are successfully establishing protections against harmful uses of AI in the workplace through collective bargaining, leveraging union representation to shape AI policy at the bargaining table.

Where workers are meaningfully involved in decisions about AI, they are also demonstrating how these systems can be designed and deployed so that technology serves people rather than replaces them. Employers' unchecked use of AI threatens both workers and worker power.

Most of the world's largest tech companies, including Amazon, Google, and Microsoft, have agreed to tackle what they call deceptive artificial intelligence (AI) in elections. The twenty firms signed an accord committing them to fight voter-deceiving content, saying they will deploy technology to detect and counter the material. But one industry expert says the voluntary pact will "do little to prevent harmful content being posted".

