In Big Election Year, A.I.’s Architects Move Against Its Misuse

Artificial intelligence companies have been at the vanguard of developing the transformative technology. Now they are also racing to set limits on how A.I. is used in a year stacked with major elections around the world.

Last month, OpenAI, the maker of the ChatGPT chatbot, said it was working to prevent abuse of its tools in elections, partly by forbidding their use to create chatbots that pretend to be real people or institutions. In recent weeks, Google also said it would limit its A.I. chatbot, Bard, from responding to certain election-related prompts to avoid inaccuracies. And Meta, which owns Facebook and Instagram, promised to better label A.I.-generated content on its platforms so voters could more easily discern what information was real and what was fake.

On Friday, Anthropic, another leading A.I. start-up, joined its peers by prohibiting its technology from being applied to political campaigning or lobbying. In a blog post, the company, which makes a chatbot called Claude, said it would warn or suspend any users who violated its rules. It added that it was using tools trained to automatically detect and block misinformation and influence operations.

“The history of A.I. deployment has also been one full of surprises and unexpected effects,” the company said. “We expect that 2024 will see surprising uses of A.I. systems — uses that were not anticipated by their own developers.”

The efforts are part of a push by A.I. companies to get a grip on a technology they popularized as billions of people head to the polls. At least 83 elections around the world, the largest concentration for at least the next 24 years, are anticipated this year, according to Anchor Change, a consulting firm. In recent weeks, people in Taiwan, Pakistan and Indonesia have voted, with India, the world’s biggest democracy, scheduled to hold its general election in the spring.

How effective the restrictions on A.I. tools will be is unclear, especially as tech companies press ahead with increasingly sophisticated technology. On Thursday, OpenAI unveiled Sora, a technology that can instantly generate realistic videos. Such tools could be used to produce text, sounds and images in political campaigns, blurring fact and fiction and raising questions about whether voters can tell what content is real.

A.I.-generated content has already popped up in U.S. political campaigning, prompting regulatory and legal pushback. Some state legislators are drafting bills to regulate A.I.-generated political content.

Last month, New Hampshire residents received robocall messages dissuading them from voting in the state primary in a voice that was most likely artificially generated to sound like President Biden. The Federal Communications Commission last week outlawed such calls.

“Bad actors are using A.I.-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities and misinform voters,” Jessica Rosenworcel, the F.C.C.’s chairwoman, said at the time.

A.I. tools have also created misleading or deceptive portrayals of politicians and political topics in Argentina, Australia, Britain and Canada. Last week, former Prime Minister Imran Khan, whose party won the most seats in Pakistan’s election, used an A.I. voice to declare victory while in prison.

In one of the most consequential election cycles in memory, the misinformation and deceptions that A.I. can create could be devastating for democracy, experts said.

“We are behind the eight ball here,” said Oren Etzioni, a professor at the University of Washington who specializes in artificial intelligence and a founder of True Media, a nonprofit working to identify disinformation online in political campaigns. “We need tools to respond to this in real time.”

Anthropic said in its announcement on Friday that it was planning tests to identify how its Claude chatbot could produce biased or misleading content related to political candidates, political issues and election administration. These “red team” tests, which are often used to break through a technology’s safeguards to better identify its vulnerabilities, will also explore how the A.I. responds to harmful queries, such as prompts asking for voter-suppression tactics.

In the coming weeks, Anthropic is also rolling out a trial that aims to redirect U.S. users who have voting-related queries to authoritative sources of information such as TurboVote from Democracy Works, a nonpartisan nonprofit group. The company said its A.I. model was not trained frequently enough to reliably provide real-time facts about specific elections.

Similarly, OpenAI said last month that it planned to point people to voting information through ChatGPT, as well as label A.I.-generated images.

“Like any new technology, these tools come with benefits and challenges,” OpenAI said in a blog post. “They are also unprecedented, and we will keep evolving our approach as we learn more about how our tools are used.”

(The New York Times sued OpenAI and its partner, Microsoft, in December, claiming copyright infringement of news content related to A.I. systems.)

Synthesia, a start-up with an A.I. video generator that has been linked to disinformation campaigns, also prohibits the use of its technology for “news-like content,” including false, polarizing, divisive or misleading material. The company has improved the systems it uses to detect misuse of its technology, said Alexandru Voica, Synthesia’s head of corporate affairs and policy.

Stability AI, a start-up with an image-generator tool, said it prohibited the use of its technology for illegal or unethical purposes, worked to block the generation of unsafe images and applied an imperceptible watermark to all of its images.

The biggest tech companies have also weighed in. Last week, Meta said it was collaborating with other firms on technological standards to help recognize when content was generated with artificial intelligence. Ahead of the European Union’s parliamentary elections in June, TikTok said in a blog post on Wednesday that it would ban potentially misleading manipulated content and require users to label realistic A.I. creations.

Google said in December that it, too, would require video creators on YouTube and all election advertisers to disclose digitally altered or generated content. The company said it was preparing for 2024 elections by restricting its A.I. tools, like Bard, from returning responses for certain election-related queries.

“Like any emerging technology, A.I. presents new opportunities as well as challenges,” Google said. A.I. can help fight abuse, the company added, “but we are also preparing for how it can change the misinformation landscape.”


By William Brown