This is not a deepfake: Part 2, the solution
Deepfakes will eat reality if we don’t do something. Here’s what Congress should do.
Yesterday I published Part 1 of this post, dealing with deepfakes and reality. If you haven’t read it, go ahead and read it now. I’ll wait.
To review: Some dude (or dudette, or whatever pronouns people use these days) can feed video of Rober or Donaldson into an AI and produce a realistic deepfake. That’s a problem that bleeds over from the “content creator” world of TikTok, Instagram and YouTube into the very real world of politics and conspiracies.
We can’t allow the world of lies and deepfakes to eat all of reality. This is one time when it should be clear that government should act. AI and deepfakes need to be regulated. I don’t think anyone can offer a decent argument against it. Even Elon Musk agrees. The question is: how? And how much?
To me, the simplest answer comes from a physical thing that’s very important to the world: currency. Engineers can spend a lifetime chasing algorithms to identify fakes, which will only make the fakes better at gaming the algorithms designed to catch them. The effort needs to start with authenticating the real thing (real video, real production) in a way that fakes will struggle to duplicate.
The movie studios got their dream legislation to do exactly this for DVDs and electronic content. The Digital Millennium Copyright Act (DMCA) was signed into law by President Bill Clinton in 1998. It gives legal teeth to copyright holders and provides mechanisms to compel content providers (like YouTube) to take down infringing material.
Technologically speaking, the cryptographic systems used to protect DMCA-covered content (movies, in whatever form, for instance) are shielded by 17 U.S.C. § 1201, the law’s anti-circumvention provision. Even “white hat” researchers are largely barred from reverse engineering any encryption method used to protect content covered by the DMCA. But that protection only covers content subject to copyright, not everything that’s true or generated by real cameras in the real world as opposed to deepfakes emanating from the bowels of an integrated circuit.
How could the DMCA help regulate AI? It can’t directly, but a similar law could give legal cover to a method that can help fence in AI. Let’s talk about folding money: moola, currency.
A lot of effort goes into the design of often-counterfeited bills like the $20 and the $100. Everything from the paper (it’s not really paper, it’s closer to linen), to the ink, to security threads, to tiny hidden features, is designed to make real money distinguishable from fake money. Building similar safeguards into real-world cameras and smartphones is a non-trivial but achievable effort.
Digital cameras already have a metadata system built into their images, called EXIF, that stores all kinds of details about the camera, location, and settings used to record an image. Of course, EXIF can be easily faked, because it wasn’t designed as a security feature. But there is a mechanism, called XTIA, that allows EXIF to store cryptographic keys. There’s also a field of cryptography called “steganography,” which means hiding secret information inside other, non-secret data. It’s the basis for certain kinds of digital watermarking, and also the method used to trace printed materials back to the printer that produced them.
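To make the EXIF piece concrete, here’s a minimal sketch in Python using the Pillow library to dump the EXIF tags an image already carries today. The filename is just a placeholder, and the tags you actually get back vary by device:

```python
# Minimal sketch: read the existing (unsigned, easily faked) EXIF metadata
# from an image file. "photo.jpg" is a placeholder filename.
from PIL import Image
from PIL.ExifTags import TAGS


def dump_exif(path: str) -> dict:
    """Return a {tag_name: value} dict of the EXIF data embedded in an image."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}


if __name__ == "__main__":
    for name, value in dump_exif("photo.jpg").items():
        print(f"{name}: {value}")
```

Nothing in that output is trustworthy on its own, which is exactly the gap a signing requirement would close.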
Congress should create a law requiring camera and smartphone manufacturers to include digital watermarks and encryption in images captured by a real-world camera. These would be tagged to a manufacturer and even the serial number of the device itself, and authenticated with a protected private key held by the manufacturer. The public key would be added to the EXIF metadata, and perhaps XTIA could be used to protect keys that authenticate the image’s “hash digest,” a mathematical checksum that proves the image or video hasn’t been altered.
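For illustration only, here’s a minimal sketch in Python (using the widely available cryptography library) of that sign-then-verify idea. The key handling is deliberately simplified; in a real scheme the private key would live in the device’s secure hardware, not in a script:

```python
# Sketch of the proposal above: the device signs a hash digest of the image
# bytes with its private key; anyone with the matching public key can verify
# that the bytes haven't changed since capture. Keys here are generated on the
# fly purely for demonstration.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# Stand-in for the manufacturer/device key pair.
device_private_key = Ed25519PrivateKey.generate()
device_public_key = device_private_key.public_key()


def sign_image(image_bytes: bytes) -> bytes:
    """Hash the image and sign the digest; this is what the camera would embed in metadata."""
    digest = hashlib.sha256(image_bytes).digest()
    return device_private_key.sign(digest)


def verify_image(image_bytes: bytes, signature: bytes, public_key: Ed25519PublicKey) -> bool:
    """Recompute the digest and check the signature; this is what a viewer or platform would do."""
    digest = hashlib.sha256(image_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False


original = b"...raw image bytes straight off the sensor..."
sig = sign_image(original)
print(verify_image(original, sig, device_public_key))               # True: untouched
print(verify_image(original + b"tampered", sig, device_public_key))  # False: altered
```

The point isn’t this particular algorithm; it’s that verification can be cheap and automatic once the signing requirement exists at the hardware level.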
The cryptosystem used to protect real images matters, and smart people can craft that solution (with rigorous testing and public comment). What matters more is giving the technical solution legal teeth by making circumvention a crime, just as the DMCA does. That is what legally protects real-world, actual content from being deepfaked.
Of course, everything gets edited, but editing tools can also read the metadata from the actual source clips and include a warning, or an assurance if that’s a better framing, that the video in the final product is “real,” with some threshold of quality attached. Is it 50% real, 70% real, or 0% real? Many copiers and scanners are already prevented by their manufacturers from copying currency, because doing so is a crime. Software companies like Adobe and Apple have an interest in complying with laws and in keeping deepfakes from ruining their market. Open source products like Blender would be a different matter, but I think that community would eventually create its own methods for discerning authentic footage from fakes.
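As a back-of-the-envelope illustration of that “how real is it?” number, the score could be a simple duration-weighted ratio. The Clip structure and the verified flags below are hypothetical inputs an editing tool would fill in from source-clip metadata:

```python
# Sketch: weight each source clip by its runtime in the final cut and report
# what fraction of the video traces back to signature-verified footage.
from dataclasses import dataclass


@dataclass
class Clip:
    seconds: float   # how long this clip runs in the final cut
    verified: bool   # did its signature check out against the device's public key?


def percent_real(clips: list[Clip]) -> float:
    """Duration-weighted share of the final video backed by verified source footage."""
    total = sum(c.seconds for c in clips)
    if total == 0:
        return 0.0
    real = sum(c.seconds for c in clips if c.verified)
    return 100.0 * real / total


timeline = [Clip(42.0, True), Clip(18.0, False), Clip(40.0, True)]
print(f"{percent_real(timeline):.0f}% real")  # prints "82% real" for this example
```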
And there are significant privacy issues involved in storing data that connects the device, location, and owner to the image. These would need to be addressed, but important as they are, they are not more important than stemming the unchecked growth of realistic lies and preserving the ability to discern actual truth.
Congress has to treat real content the way it treats currency: it has value. If we don’t have some kind of regulation of the AI deepfake world, the counterfeit world will take over and eat reality. Nobody wants that.
Thoughtful. Thank you.
We haven't even been able to deploy a ubiquitous public-key cryptosystem for authenticating e-mail senders and messages, so I'm skeptical that one for images is doable outside some very specific narrow niches. And even if you are successful doing so, you still have the "Analog Hole" where one can get around the protections by taking a picture of the image as it's displayed on the screen.
Related to this issue is all of the AI being built into the camera stack itself. The photos people take are no longer the values the camera sensor pixels registered, but AI-compiled composites of multiple discrete exposures, combined with algorithms imputing and replacing pixels in the name of "better looking" photos. The Verge has a great article on this in the context of Google's new Pixel 8 Pro:
https://www.theverge.com/2023/10/7/23906753/google-pixel-8-pro-photo-editing-tools-ai