Perhaps the scariest part of these Frankenstein-ish creations, these "deepfakes," is how easy they are to make, especially when the software for a specific application--such as pornography--is publicly available. A layman can simply plug sufficient photos or footage into pre-written code and produce a lifelike lie about his or her subject. Deepfakery is democratizing, and malicious actors, however unsophisticated, are increasingly able to harness it.
Deepfakes are also inherently hard to detect. The technology used to create them is trained in part with the same algorithms that distinguish fake content from real--so any strides in ferreting out false content will soon be weaponized to make that content more convincing.
This means online platforms have their police work cut out for them: sustained investment in staying one step ahead will always be needed, along with algorithmic tweaks that demote untrustworthy sources and de-emphasize virality. Some suggest holding sites liable for the damages caused by deepfakes if companies do too little to remove dangerous content.
Like technical solutions, policy answers to the deepfake problem are elusive, but steps can be taken. Many harmful deepfakes are already illegal under copyright, defamation and other laws, but Congress should tweak existing fraud-related regulations to cover the technology explicitly--amping up penalties and bringing federal resources, as well as public attention, to bear on a devilish problem.
Humans have so far hardly had to think about what happens when someone else uses our faces. To avoid that wormhole of darkness, we will have to start thinking hard.
Editorial on 01/08/2019
Print Headline: Watch for 'deepfakes'