“This has huge consequences because the ability for anyone to create misinformation means it’s no longer just the practice of the state — literally your angry teenager can create it and disseminate it as well.”
ASI’s Gibson said that, given the “viral” nature of the internet, where inaccurate news can spread like wildfire, and given the persuasive power of audiovisual content over written news, the likelihood of deepfakes being used to influence the 2020 presidential election — even in a minor way — was high.
“I’d be really surprised if, in 2020, there wasn’t a reasonable amount of background deepfaking going on,” he said. “The thing that I’m most worried about is not that Russia produces a picture-perfect Trump video; I think it will be at the local level or in an important swing state.”
The next step for the commission is the creation of a “toolkit” that enables it to spot deepfakes with ease, to give journalists the ability to detect deepfake material, and to educate the public about the technology. It is working with U.S. universities Stanford and Harvard, as well as London’s UCL, to build the deepfake-detection software, and is looking to roll it out in the next 12 months, Schick said.
“Disinformation is generally not illegal in a democracy, and we don’t want to inspire governments to move towards content-based regulation,” Donahoe said.
“The most effective tool for combating disinformation’s effect is to have the citizenry prepared for, and resilient to, disinformation.”