The popularity of social media channels and the increasingly mature nature of artificial intelligence (AI) and machine learning have converged in a troubling trend: deepfake content. The material, defined by Fast Company’s Michael Grothaus as “media that has been created artificially using deep learning techniques,” has been the subject of much debate, and concern over deepfakes is mounting in the run-up to the 2020 U.S. presidential election.
To that end, Facebook recently announced a ban on deepfake videos, stating that “They present a significant challenge for our industry and society as their use increases.” Facebook’s ban comes on the heels of its Deepfake Detection Challenge (DFDC), in which the company partnered with Microsoft, AWS, and academics to “spur researchers around the world to build innovative new technologies that can help detect deepfakes and manipulated media.” With over 1,000 teams competing for $1 million in prize money, the DFDC hoped to identify new means of significantly reducing, if not permanently eradicating, the proliferation of deepfake content.
The premise of the DFDC was essentially to unite the AI community in harnessing technology to combat its own misuse. Or as Andrew Ng tweeted, “To the AI Community: You have superpowers, and what you build matters. Please use your powers on worthy projects that move the world forward.”
But will the community heed Ng’s plea? And will current efforts to detect and thwart deepfakes succeed before these videos cause irreparable damage?
As the Washington Post’s Drew Harwell put it, “A disinformation campaign using deepfake videos probably would catch fire because of the reward structure of the modern Web, in which shocking material drives bigger audiences—and can spread further and faster than the truth.” Deepfakes have yet to lead to a true crisis in the U.S., but Harwell’s piece describes situations in Gabon and Malaysia in which deepfakes caused significant political turmoil.
Facebook’s deepfake ban is certainly a step in the right direction, but it also leaves some gray areas that could be exploited. For example, clips in which words were artificially reordered can still remain on the platform. And of course, the people and technology behind these videos are continuously honing their abilities to create deepfake content that is difficult to detect.
Harwell’s Washington Post piece has some interesting details on current initiatives beyond the DFDC that are underway to combat deepfakes; hopefully these and other projects will outpace deepfake creators in their applications of AI.