When pundits and researchers tried to guess what sort of manipulation campaigns might threaten the 2018 and 2020 elections, misleading AI-generated videos often topped the list. Though the tech was still emerging, its potential for abuse was so alarming that tech companies and academic labs prioritized working on, and funding, methods of detection. Social platforms developed special policies for posts containing “synthetic and manipulated media,” in hopes of striking the right balance between preserving free expression and deterring viral lies. But now, with about three months to go until November 3, that wave of deepfaked moving images seems never to have broken. Instead, another form of AI-generated media is making headlines, one that is harder to detect and yet much more likely to become a pervasive force on the internet: deepfake text.
Last month brought the introduction of GPT-3, the next frontier of generative writing: an AI that can produce shockingly human-sounding (if at times surreal) sentences. As its output becomes ever more difficult to distinguish from text produced by humans, one can imagine a future in which the vast majority of the written content we see on the internet is produced by machines. If this were to happen, how would it change the way we react to the content that surrounds us?
This wouldn’t be the first media inflection point at which our sense of what’s real shifted all at once. When Photoshop, After Effects, and other image-editing and CGI tools began to emerge three decades ago, their transformative potential for artistic endeavors, as well as their impact on our perception of the world, was immediately recognized. “Adobe Photoshop is easily the most life-changing program in publishing history,” declared a Macworld article from 2000, announcing the launch of Photoshop 6.0. “Today, fine artists add finishing touches by Photoshopping their artwork, and pornographers would have nothing to offer except reality if they didn’t Photoshop every one of their graphics.”
We came to accept that technology for what it was and developed a healthy skepticism. Very few people today believe that an airbrushed magazine cover shows the model as they really are. (In fact, it’s often un-Photoshopped content that attracts public attention.) And yet, we don’t fully disbelieve such photos, either: While there are occasional heated debates about the impact of normalizing airbrushing, or, more relevant today, filtering, we still trust that photos show a real person captured at a specific moment in time. We understand that each picture is rooted in reality.
Generated media, such as deepfaked video or GPT-3 output, is different. If used maliciously, there is no unaltered original, no raw material that could be produced as