Trump And AI Videos: Yay or Nay?
Artificial intelligence and AI-generated videos are more prevalent than ever in political messaging, making disinformation harder to spot and easier to spread.
💬 Quick CONVERSATION STARTERS:
Thoughts about how Trump leverages AI for political messaging?
This was not the first time that President Donald Trump posted an AI-generated — and therefore fake — video.
In the 45-second fake video, Trump and former President Barack Obama are seen sitting in the Oval Office at the White House before an AI-generated Obama is arrested by agents as the song “YMCA” plays in the background.
The AI-generated version of Trump grins as Obama is apprehended and eventually shown behind bars. The AI-generated Obama is also seen wearing an orange jumpsuit behind bars.
The video surfaced amid allegations from Director of National Intelligence Tulsi Gabbard that Obama and his intel chiefs manufactured the Russia collusion narrative.
In the post, Trump offered no direct commentary on the footage, which included the caption: “No one is above the law.”
In his news analysis, the news creator explains: “Obama has complete presidential immunity, just like Trump did, that’s what the Supreme Court said.”
“This is, again, another example of Donald Trump going after his political opponents. He’s targeting those he thinks wronged him, even though there is no evidence that they wronged him.”
According to a report by Newsweek, “While some Trump supporters cheered the video and called for Obama's arrest, others questioned whether it was an attempt to deflect from the Epstein case.”
No matter where you stand in this political debate, this latest AI-generated fake video comes amid growing scrutiny over the use of artificial intelligence and generative AI in political messaging — especially as misinformation becomes harder to spot and easier to spread.
Are concerns overblown or real?
A recent paper published by the Knight First Amendment Institute at Columbia University reviewed the available evidence on GenAI’s impact on the 2024 elections and identified several reasons why that impact has been overblown.
These include, the authors say, the inherent challenges of mass persuasion, the complexity of media effects and people’s interaction with technology, the difficulty of reaching target audiences, and the limited effectiveness of AI-driven microtargeting in political campaigns.
The paper shows:
AI will increase the quantity of information and misinformation around elections. But fears about AI-driven increases in misinformation miss the mark because they focus too heavily on the supply of information and overlook the role of demand. People consume and share (mis)information because it aligns with their worldviews, and they seek out sources that cater to their perspectives.
AI will increase the quality of election misinformation. But humans have been able to write fake text and tell lies since the dawn of time, and they have found ways to make communication broadly beneficial by holding each other accountable, spreading information about others’ reputations, and punishing liars while rewarding good informants. We expect these safeguards to hold even under conditions where lifelike content can be created — and is available — at scale.
AI will improve the personalization of information and misinformation at scale. But AI could, in theory, also expand informational equality by delivering high-quality, language-appropriate content to communities that mainstream media often underserve, including linguistic minorities, first-time voters, and rural electorates. The democratic question is therefore less about whether personalization occurs and more about whether citizens retain exposure to diverse viewpoints and quality information.
Their conclusions?
The authors of the Knight First Amendment Institute report believe that the threat isn’t a tsunami of AI-generated content — it’s the pre-existing demand for information that confirms our biases. Flooding an already saturated information ocean with more drops of content — no matter how cheap to produce — does little to change who sees what, or why. Nor is the danger in higher-quality fakes. The effectiveness of misinformation hinges on the credibility of its source and its narrative appeal, not its technical perfection. A “good enough” lie that fits a political agenda has always been more potent than a perfectly rendered but irrelevant falsehood.
Finally, the promise of hyper-personalized persuasion at scale runs into the same bottlenecks that have always constrained political advertising: messy data, scarce attention, the high cost of delivery, and the simple, stubborn fact that it’s incredibly difficult to change someone’s mind.
Is AI wearing down democracy?
According to a recent New York Times analysis, “content generated by artificial intelligence has become a factor in elections around the world. Most of it is bad, misleading voters and discrediting the democratic process.”
The New York Times writes: “Free and easy to use, AI tools have generated a flood of fake photos and videos of candidates or supporters saying things they did not or appearing in places they were not — all spread with the relative impunity of anonymity online. The technology has amplified social and partisan divisions and bolstered antigovernment sentiment, especially on the far right, which has surged in recent elections in Germany, Poland and Portugal.”
The Times cites evidence from 50 countries that AI has already transformed elections.