
A compilation of AI deepfakes of world leaders. Credit: Sky News
In mid-October, the Ontario government in Canada sponsored an ad featuring archival clips of former US president Ronald Reagan. As edited together, the clips portray Reagan as opposing tariffs, in contrast to Trump’s current international economic policy of placing tariffs on nearly every territory in the world, including Canada. It’s an international spat conducted via archival footage edited and broadcast over the airwaves.
Roughly a week later, genAI videos of Reagan were popping up on Sora, the genAI social media app. People had generated outlandish videos of him doing things he never actually did: spinning on the floor of a store, speaking at Wrestlemania, and prostrating himself in a mall wearing cat ears. Several of the clips even had the archival “look” of having been shot on tape, a feature intrinsic to the broadcast media of his era. This is the most worrying aspect of these genAI videos: they exploit historical features to seem more real than they actually are.
As genAI videos become more lifelike, many have sounded the alarm about their potential for abuse. In May, Trump signed the “Take It Down Act,” which outlawed deepfake porn depicting real people in sexual scenes. At a larger scale, and more worrisome, is the ability of malevolent actors to exploit the realism of these videos to sow society-wide chaos: stoking racial animosity, spreading disinformation, affecting political outcomes, or muddling the information sphere with so many unverifiable statements that people can no longer tell what is real. Some of these trends have already begun.
Around the same time these genAI “Reagan” videos were circulating, so too were several genAI videos depicting Black women threatening violence if they lost their SNAP benefits or admitting to SNAP fraud. These were not real videos, but they had the effect of perpetuating the racist stereotypes of the “welfare queen” and of Black people as violent and lazy. People in the comments seemed to believe the women were real, and even those who recognized the videos as genAI still believed the stereotypes.
Also within the past month, another worrying genAI trend surfaced: fake videos depicting arrests from police bodycam angles. People immediately pointed out the potential harm: real people could be falsely portrayed as committing crimes.
With historical-looking genAI videos, people could distort the historical record as it lives in popular perception. Another trend within the past couple of months is genAI videos depicting children nostalgic for early-2000s Walmart. Odd as it is, it seems designed to convince older millennials that the best years are behind them and that those years featured low prices. It’s an incredible oversimplification of the early 2000s that verges on fantasy but exploits their current dissatisfaction with society.
When it comes to historical distortion, a step further into the extreme is genAI videos depicting historical politicians. For Ronald Reagan specifically, malevolent actors could exploit the increasing realism of genAI to pump out fraudulent videos depicting him announcing policies he never proposed or giving remarks he never gave. These could then be used as pretext to enact similar policies or espouse similar ideologies, attempting to establish precedent where none exists. Worse still would be genAI videos of the current president saying equally untrue things, ranging from the banal non sequiturs he really does deliver to fabricated foreign policy statements to a declaration of war.
Or, alternatively and more likely, the government could use this technology itself to manufacture crises it must then respond to. Recent social media posts by official government accounts not only show their comfort with genAI images depicting their ideologies and policy goals; they also deceptively edit together videos from different times and places to create the impression that a particular incident was especially chaotic. Equally crucial to this worrying potential is the ease with which these genAI videos and other content can be created and shared. Making 10 genAI videos with Sora costs $4, and social media algorithms reward a video for being controversial, allowing it to spread through countless online communities.
These ingredients, believable depictions, cultural pressure points, ease of creation, and exploitative social media algorithms, combined with the fact that some of this is already happening, make it obvious that we are on a path toward extreme social disruption caused by genAI videos. This is a final warning: stop making this content, stop sharing it, and stop these companies from having this technology and profiting from it.
Andrew Proctor is a film editor, queer film historian, and creative producer, as well as The Forum’s Creative Director. He resides in the New York City area.
