Everyone in the media business is keenly aware that AI is going to be used to create fake texts, fake images, and fake videos during the 2024 campaigns. No one (except the fakers) is really in favor of these things, but banning them runs into First Amendment and other issues. Meta, the parent company of Facebook and Instagram, is taking a baby step here by asking people who are running political ads that use AI-generated imagery to label them as such.
That sounds good, but it is hardly a solution. Among the problems:
Joe Biden signed an executive order intended to encourage honest players to disclose what they are doing. But what about dishonest or out-and-out malevolent players (think: the St. Petersburg troll farm)?
AI-produced ads have already run. In April, the RNC ran a fake ad depicting the future of the U.S. if Biden is reelected. It showed boarded-up storefronts, armored military patrols in the streets, and waves of immigrants producing panic. In June, Ron DeSantis ran an AI-produced ad attacking Donald Trump by showing him hugging Anthony Fauci. It won't be the last such ad.
Google already has a similar labeling policy in place. The problem, again, is malevolent actors who use AI and don't label their photos and videos as fake. (V)