The Republican National Committee fired off an attack ad as soon as President Joe Biden announced his reelection campaign last week.

The 30-second spot, which used fake visuals of China invading Taiwan, financial markets crashing and immigrants overrunning the border, sported a disclaimer: “Built entirely with AI imagery.”

The ad – which the GOP called “an AI-generated look into the country’s possible future if Joe Biden is re-elected in 2024” – is a sign of what’s to come in the 2024 presidential election, experts say.

2024 promises to be the first AI election cycle, with artificial intelligence potentially playing a pivotal role at the ballot box. And that’s raising concerns.

Even as technology grows more sophisticated and powerful, spreading into all aspects of American life, there are still very few rules governing its use.

Spurred by the Biden attack ad, Rep. Yvette D. Clarke, D-N.Y., introduced a bill Tuesday that would require that political ads disclose the use of AI-generated imagery.

“The upcoming 2024 election cycle will be the first time in U.S. history where AI-generated content will be used in political ads by campaigns, parties, and Super PACs,” Clarke said in a statement. “If AI-generated content can manipulate and deceive people on a large scale, it can have devastating consequences for our national security and election security.”

Political campaigns are pressure testing AI for everything from fundraising emails to get-out-the-vote chatbots, Nathan Sanders, a data scientist and an affiliate at the Berkman Klein Center at Harvard University, and Bruce Schneier, a fellow and lecturer at the Harvard Kennedy School, wrote in The Atlantic.

“Previous technological revolutions – railroad, radio, television, and the World Wide Web – transformed how candidates connect to their constituents, and we should expect the same from generative AI,” Sanders and Schneier wrote.

Best-case scenario: AI gets voters more engaged and decreases polarization, they said. Worst-case scenario: AI is used to mislead or manipulate voters.

“AI will enable instant responses and more precise voter targeting,” said Darrell West, a senior fellow at the Center for Technology Innovation at the Brookings Institution.

What’s setting off alarm bells: The potential to use AI for dirty tricks, such as “deepfakes,” videos and images that have been digitally created or altered with AI or machine learning to make it appear as if people have said or done things they have not. 

“This will be the first AI election that draws on digital tools that can generate videos, pictures, audiotapes and many other things,” West said. “There is a risk that disinformation will expand and expose voters to false material that will look authentic. Mass manipulation is dangerous for democracy because it could distort voter decision-making. Right now, there is no required disclosure so voters may not even know that the videos are fake.”

Source: usatoday.com