According to the Chinese zodiac, 2024 is the year of the dragon. Having a mythical beast as the symbol of the year is a good match for an election year that will be dominated by mythical (dis)information. The problem is that the big tech companies are abandoning their roles as watchdogs looking for and beating down disinformation. Elon Musk has taken the ideological position that lies are just as important as the truth, so he is against banning lies or even labeling them as such. If Donald Trump comes home (see above), he will surely use the lack of any vetting or censorship to the hilt, spewing falsehoods as fast as his stubby fingers can type.
In 2016, Russian trolls flooded Twitter with disinformation in an effort to elect Donald Trump. In 2020, the then-management of Twitter made an effort to police the site and remove the most egregious lies. It now looks like 2024 will be a repeat of 2016, at least on Twitter, only with more juice. New AI tools, which weren't available in 2016, make it much easier to automate trolling and flood the zone with false information. Emily Bell, a professor of journalism at Columbia University, said: "Musk has taken the bar and put it on the floor."
At Facebook, the situation is different. Over 20,000 workers have been laid off, making it harder for Facebook to enforce its own rules, even if it wanted to. The cuts hit the team responsible for policing fraud, harassment, and offensive content especially hard. After all, that team doesn't generate any profits. Earlier this month, someone posted a fake photo of Gov. J.B. Pritzker (D-IL) allegedly signing a bill allowing undocumented immigrants to become police officers and sheriff's deputies. He did no such thing, but the caption read: "In Illinois American citizens will be arrested by illegals." Is that going to make sure Trumpy voters turn out in Illinois (and elsewhere)? In the immortal words of Sarah Palin, "You betcha!"
But at Facebook, it isn't only a matter of lack of resources to police disinformation. Facebook has created a new program to allow users to opt out of fact checking, so they can see anything posted, even things Facebook's staff has labeled as out-and-out lies. Global Affairs President Nick Clegg said: "We feel we've moved quite dramatically in favor of giving users greater control over even quite controversial sensitive content." In other words, if users want to treat lies and the truth as equals, who are we to stop them?
Banning lies on social media is intensely political. Katie Harbath, the former director of public policy at Facebook, said: "For Democrats, we weren't taking down enough, and for Republicans we were taking down too much. The result was an overall sense that after doing all this, we're still getting yelled at. It's just not worth it anymore." In other words, you can't make the people who tell lies and the people who oppose them both happy, so why bother?
One thought that came up was to end all political advertising on the site. But in July 2022, Mark Zuckerberg killed the idea. Besides, the problem isn't only false ads. It is also deliberately false postings by users, including politicians. For example, Mark Finchem, who was running for Arizona secretary of state in 2022, posted to Facebook that his opponent, Adrian Fontes, was a member of the Chinese Communist Party and a cartel criminal who had rigged elections before. If Fontes had not been a public figure, the resulting defamation lawsuit would have cost Finchem tens of millions of dollars, but since Fontes is a public figure, winning such a lawsuit would have been very difficult.
One as-yet-unanswered question is what Meta's new brainchild, Threads, a Twitter clone, is going to do about policing content. So far, executives have merely said that they will not encourage politics and hard news. But some folks don't need much encouragement. Or any.
The problem of disinformation is not limited to the U.S. Mexico also has a presidential election in 2024, and the lies and false information have already started. For example, social media there are already flooded with statements that the mayor of Mexico City, Claudia Sheinbaum, who was an environmental scientist before becoming a politician and who is a likely presidential candidate, was born in Bulgaria and thus ineligible to be president (Sheinbaum's mother was born in Bulgaria, but Sheinbaum herself was born in Mexico City). Expect disinformation to be a worldwide plague going forward.
Can anything be done? Maybe. When a newspaper publishes a false defamatory item about someone, yelling "FIRST AMENDMENT" doesn't get it off the hook. That is not true for electronic media, but Congress could pass a law saying that the same rules that apply to newspapers, which have been well tested in the courts, also apply to digital media. The argument that "we publish so many user-generated messages that we can't vet them all" would likely be met with "Then publish fewer of them and check them all." But, of course, nothing is going to happen, because Republicans in Congress see tech companies banning lies as outrageous censorship, so they are never going to pass a law allowing people to sue the companies for publishing the lies.
Not all disinformation is on social media, though. The FSB has a program in which individual Russians talk to "useful idiots" in the U.S. to spread Russian propaganda. This makes it harder to finger the FSB as the source of the disinformation. The Russian mouthpieces are almost always unaware they are spreading lies concocted by the FSB. When the propaganda reaches a U.S. journalist, it appears to come from an American, even though the ultimate source is the FSB. For example, the FSB made up a story that the White Helmets, a humanitarian group operating in Syria, was running a black market in human organs and had faked chemical attacks by Syrian President Bashar al-Assad. The story went through some intermediaries and ended up being broadcast by the far-right outlet OANN. (V)