Kate, the Princess of Wales, released a video last week announcing that she had been diagnosed with cancer following major abdominal surgery in January. The statement was intended not just to inform the public but also to put to rest rumors and speculation that had grown ever more convoluted and obsessive since Britain's future queen disappeared from public view after a formal appearance on Christmas Day.
However, only minutes after the video’s release, social media swarmed with claims that the video was a fake generated using AI.
Some of that suspicion was understandable, considering that a recent photograph released by the royal family had been manipulated using photo-editing tools; Kate admitted as much earlier this month. But claims that the video was fake persisted even after BBC Studios confirmed its authenticity in the best way possible: not through some complicated analysis of the imagery, but by stating plainly that it had filmed the video.
For years, when people warned against the threat of AI tools generating audio, images, and video, the biggest concern was that these tools might be used to generate convincing fakes of public figures in compromising positions. But the flip side of that possibility may be an even greater issue. Just the existence of these tools is threatening to deplatform reality.
In another recent flap, “deepfake” tools were used to create pornographic images of music icon Taylor Swift. These images, which seem to have originated with an informal contest in the toxic cesspool of the 4chan message board, spread so quickly across the poorly moderated social media site X (formerly Twitter) that it was forced to shut down searches involving Swift’s name while it looked for a way to deal with them. The images were seen millions of times before X’s deliberately weakened defenses managed to clear many of the images from the platform.
But what’s happening with Kate bears an even more striking resemblance to another case of real video being called out as fake—and this one is an even bigger signal of what’s ahead as we roll toward Election Day.
On March 12, Donald Trump posted on Truth Social, attacking both Biden and a series of videos:
The Hur Report was revealed today! A disaster for Biden, a two tiered standard of justice. Artificial Intelligence was used by them against me in their videos of me. Can’t do that Joe!
Trump appears to be referring to a series of 32 videos that were shown during the House hearing in which former special counsel Robert Hur testified. Those videos, shown by House Democrats, contained instances in which Trump failed to recall the names of foreign leaders, mispronounced simple words (including “United States”), referenced Barack Obama when he meant Joe Biden, and delivered a plethora of nonsensical asides. That last category included a claim that windmills were killing whales.
All of the videos were real clips taken from Trump’s public appearances. But his dismissal of them as being the product of AI shows just how simple it is to plant doubt about any event, no matter how public or well-documented.
For the past year, even as AI image generation has steadily improved and AI videos have moved beyond laughable curiosities, there has been a false confidence that the veracity of these images could always be discerned. Many people continue to believe that a close look at the eyes, limbs, or fingers in generated images will locate some telltale flaws. Or that, even for the rare AI image that might fool the naked eye, some software tool would easily unravel the deception.
Wall Street Journal commentators have been quick to point out flaws in sample videos created with OpenAI's Sora tool. What they're not pointing out is that this is very, very early work from that system.
The era in which AI-generated imagery can be readily spotted is already fading. As the companies and personalities behind these tools are fond of saying, these systems will never be worse than they are now. From here, they will only get better. The images they create will only get more realistic and more difficult to separate from those generated using a camera aimed at the physical world.
The line between what's real and what's fake is becoming very blurry, very quickly. But even if that line never fades completely, it may not matter. Far less sophisticated tools from five years ago could already fool people and erode trust in online imagery. Most people are simply not going to scrutinize each image for flaws, and they are not going to reject claims that a real image is AI-generated.
Disinformation on social media isn't just driving up hate speech and racism online; it's also a core part of a declining belief in journalism. According to a 2022 Pew Research Center survey, adults under 30 had nearly as much trust in what they read on social media sites as they did in information from traditional news outlets. Across all age groups, trust in national news organizations declined steeply over a period of only six years.
Attention-hungry trolls and Russian bot farms have turned social media sites into a stew of conspiracy theories and disinformation. Now it seems impossible to go a day without running into AI tools being used to create false narratives. That might be AI audio reportedly used to smear a high school principal with faked recordings of racist and antisemitic remarks. It might be a viral TikTok video purporting to capture a conversation between an emergency dispatcher and a survivor of the Francis Scott Key Bridge collapse. Or it might be video ads, sadly real, that promote erectile dysfunction medications and Russian dictator Vladimir Putin using stolen images of online influencers.
That pro-Putin video is unlikely to be a coincidence. Just as with other disinformation, Russia has been at the forefront of using generative AI tools as part of its expanding disinformation campaigns. Even many of the rumors connected to Kate go back to a Kremlin-linked group in a scheme that “appeared calculated to inflame divisions, deepen a sense of chaos in society, and erode trust in institutions—in this case, the British royal family and the news media,” according to The New York Times.
The time when AI tools can be used to confidently generate a video of Joe Biden taking a bribe, election workers tossing Trump ballots in the trash, or Trump hitting a decent golf shot may be months away … but it's no more than months. The mechanisms for identifying and removing such convincing disinformation from social media are not just weak; they are largely nonexistent.
But even before the flood of fakes arrives, we’re having to deal with what may be the more debilitating effect of this improving suite of tools—a profound doubt about the statements, images, and videos that are not fake. We’re entering a world where there is no agreed-on authority, not even the evidence of your own eyes.
In the war on reality, reality is badly outnumbered.