A viral clip of an interaction between CNN reporter Jim Acosta and a White House intern has sparked an intense online debate over whether it was doctored—a harbinger of the polarization that’s likely to follow if manufactured videos known as “deep fakes” become widespread.
The clip, a tweaked version of a video showing Acosta and a White House staffer who attempted to take his microphone at a press conference Wednesday, was posted by Paul Joseph Watson of conspiracy site Infowars as evidence that Acosta acted aggressively. White House Press Secretary Sarah Huckabee Sanders then posted the video on Twitter, saying it backed up the White House’s decision to revoke Acosta’s White House press pass.
We stand by our decision to revoke this individual’s hard pass. We will not tolerate the inappropriate behavior clearly documented in this video. pic.twitter.com/T8X1Ng912y
— Sarah Sanders (@PressSec) November 8, 2018
Sanders’ amplification of content from Infowars was hardly proof that Acosta acted aggressively, said CNN and other news outlets, which claimed the clip had been manipulated. They pointed to the video’s graininess and speed as evidence that it had been edited to make Acosta appear aggressive, as if he were pushing the staffer away.
In a YouTube video posted Thursday, Watson said he didn’t doctor or speed up the video. Instead, he attributed any differences to the “video compression” that occurred when he took the clip from a Daily Wire tweet and zoomed in to make his version.
As anyone who’s made a GIF knows, converting part of a video into a GIF can distort its quality. An analysis from BuzzFeed News suggested that while Watson’s clip appears sped up, the effect could be a byproduct of the video-to-GIF conversion.
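The speed-up effect can be illustrated with a little frame-rate arithmetic. The sketch below uses purely illustrative numbers, not measurements of Watson's clip: if a GIF conversion keeps only a fraction of the source frames but the frames are displayed at close to the original rate, motion appears faster than real time.

```python
def apparent_speed(source_fps, frames_kept_ratio, playback_fps):
    """Seconds of real-world motion shown per second of GIF playback.

    source_fps: frame rate of the original video
    frames_kept_ratio: fraction of the original frames the GIF keeps
    playback_fps: rate at which the GIF's frames are displayed
    """
    # Frames of real motion captured per real second after dropping frames.
    content_fps = source_fps * frames_kept_ratio
    return playback_fps / content_fps

# Keeping every other frame of a 30 fps video but displaying the GIF
# at 30 fps makes the motion look twice as fast.
print(apparent_speed(30, 0.5, 30))  # 2.0

# Keeping every frame and matching the playback rate preserves speed.
print(apparent_speed(30, 1.0, 30))  # 1.0
```

Real GIF encoders also store per-frame delays in 10-millisecond steps, so rounding during conversion can nudge the apparent speed even when no frames are deliberately dropped.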
Whether that compression was intentional or merely convenient isn’t clear. But the furor it engendered, fueled in part by the White House’s decision to share material from a site whose founder has suggested the 2012 Sandy Hook Elementary School shooting didn’t happen, reflected a growing concern that purposefully doctored videos will become more widespread.
In the wake of revelations that Russian trolls spread spurious articles and videos on social media to stir dissent during the 2016 U.S. presidential election, social media and cybersecurity experts have warned of a coming wave of disinformation built on what’s known as “deep fakes.”
These go well beyond clips made with standard video-editing software such as Adobe Premiere or Sony Vegas Pro, which Watson said he used to make his clip. Deep fakes instead use artificial intelligence and machine learning to edit images and videos so that they appear as close to reality as possible.
Deep fakes began on Reddit as a means for people to splice celebrities into pornography. Though Reddit has since shut down the forum, deep fakes have drawn attention as a way to distort reality.
Earlier this year, comedian Jordan Peele and BuzzFeed teamed up to release an eerie deep fake video made to look like a PSA about fake news from former President Barack Obama. Something’s off about the clip (Obama looks too shiny, and his edges seem a bit blurred), but without close attention, it looks genuine. About halfway through the video, a split screen reveals Peele speaking in perfect unison with Obama.
Bobby Chesney, a professor at the University of Texas School of Law who has studied the potential impact of deep fake technology, says the clip Sanders shared could potentially be a “cheap fake”—something that is easier to achieve and spot than sophisticated deep fake technology.
“Cheap fakes have been with us for a long time. There’s nothing novel about them,” Chesney said. “But it’s easy to move from that to a situation where there’s a true deep fake, that is much more reasonable to believe is real, being amplified by very credible sources.”
Chesney said the sharing of the Acosta clip, if it was altered to mislead, isn’t even a true warning of what is to come, because original versions of the video exist for comparison. If and when deep fake technology becomes more prevalent, such fakes will be easier to make and harder to spot, for humans and computers alike.
“The more aggressive and outlandish that those in political power are willing to be to manipulate the public, they move further and further away from the truth,” said Jonathon Morgan, CEO and cofounder of cybersecurity company New Knowledge. “Deep (fake) things will continue to improve. At some point, … a plausible-looking deep fake will be seen as a legitimate weapon in an information war,” he said.
Alongside the development of technologies like deep fakes that could fuel misinformation, companies are developing and investing in countermeasures. Serelay, a U.K.-based software company, is building “trusted media capture”—technology that could act as a virtual notary for every pixel in a photo or video. Google’s Digital News Initiative and the European Space Agency are investors.
Roy Azoulay, founder and CEO of Serelay, says the company is still tweaking the software but hopes it will keep people from losing trust in content published on the web.
“There’s one of two roads. One road is that we come to a place where we can’t trust any video we see,” Azoulay said. “The second road is what we’re hoping to find a solution for… making images and videos inherently verifiable.”
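Serelay’s actual per-pixel technique isn’t public, but the general idea of making media “inherently verifiable” can be sketched with a cryptographic fingerprint recorded at capture time. The following is a hypothetical illustration, assuming SHA-256 hashing and a simple registry, neither of which is confirmed to be Serelay’s method:

```python
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    # A cryptographic hash changes completely if even one byte of the media changes.
    return hashlib.sha256(media_bytes).hexdigest()

# At capture time: the device computes a fingerprint and registers it
# with a trusted service, before the file can be edited.
captured = b"raw sensor data from the camera"
registry = {fingerprint(captured)}

# Later: anyone can check a published file against the registry.
def is_verified(media_bytes: bytes) -> bool:
    return fingerprint(media_bytes) in registry

print(is_verified(b"raw sensor data from the camera"))  # True: untouched
print(is_verified(b"raw sensor data, subtly altered"))  # False: any edit breaks the match
```

A whole-file hash like this can only say “edited” or “not edited”; a per-pixel scheme of the kind Serelay describes would aim to localize which parts of an image were changed.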
More Info: forbes.com