‘Liar’s dividend’: The more we learn about deepfakes, the more dangerous they become

BuzzFeed enlisted the help of comedian and Barack Obama impersonator Jordan Peele to create a deepfake video of the former president.

  • Deepfakes are on the rise, and experts say the public needs to know the threat they pose.
  • But as people get used to them, it’ll be easier for bad actors to dismiss the truth as AI forgery.
  • Experts call that paradox the “liar’s dividend.” Here’s how it works and why it’s so dangerous.

In April 2018, BuzzFeed released a shockingly realistic video of a Barack Obama deepfake where the former president’s digital lookalike appeared to call his successor, Donald Trump, a “dips–t.”

At the time, as visually convincing as the AI creation was, the video’s shock value actually made it easier for people to identify it as a fake. So did BuzzFeed’s reveal, later in the video, that Obama’s avatar was voiced by comedian and Obama impersonator Jordan Peele.

BuzzFeed’s title for the clip – “You Won’t Believe What Obama Says In This Video! 😉” – also hinted at why even the most convincing deepfakes so quickly raise red flags. Because deepfakes are an extremely new invention and still a relatively rare sighting for many people, these digital doppelgängers stick out from the surrounding media landscape, forcing us to do a double-take.

But that won’t be true forever, because deepfakes and other “synthetic” media are becoming increasingly common in our feeds and For You Pages.

Hao Li, a deepfake creator and the CEO and co-founder of Pinscreen, a startup that uses AI to create digital avatars, told Insider the number of deepfakes online is doubling “pretty much every six months,” with most of them currently in pornography.

As they spread to the rest of the internet, it’s going to get exponentially harder to separate fact from fiction, according to Li and other experts.
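The growth rate Li describes compounds quickly. A back-of-the-envelope sketch makes the point (the starting count of 100,000 below is a placeholder for illustration, not a figure from the article):

```python
def deepfakes_after(initial: int, years: float) -> int:
    """Project a count that doubles every six months,
    i.e. two doubling periods per year."""
    doublings = years * 2
    return int(initial * 2 ** doublings)

# Starting from a nominal 100,000 videos, three years of
# doubling every six months is 2**6 = 64x growth:
print(deepfakes_after(100_000, 3))  # prints 6400000
```

At that pace, a niche phenomenon becomes ubiquitous within a few years, which is why Li and others expect the fact-versus-fiction problem to worsen so sharply.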

“My biggest concern is not the abuse of deepfakes, but the implication of entering a world where any image, video, audio can be manipulated. In this world, if anything can be fake, then nothing has to be real, and anyone can conveniently dismiss inconvenient facts” as synthetic media, Hany Farid, an AI and deepfakes researcher and associate dean of UC Berkeley’s School of Information, told Insider.

That paradox is known as the “liar’s dividend,” a name given to it by law professors Danielle Citron and Robert Chesney.

Many of the harms that deepfakes can cause – such as deepfake porn, cyberbullying, corporate espionage, and political misinformation – stem from bad actors using deepfakes to “convince people that fictional things really occurred,” Citron and Chesney wrote in a 2018 research paper.

But, they added, “some of the most dangerous lies” could come from bad actors trying to “escape accountability for their actions by denouncing authentic video and audio as deep fakes.”

George Floyd deepfake conspiracy

One such attempt to exploit the liar’s dividend, though ultimately unsuccessful, happened last year after the video of George Floyd’s death went viral.

“That event could not have been dismissed as being unreal or not having happened, or so you would think,” Nina Schick, an expert on deepfakes and former advisor to Joe Biden, told Insider.

Yet only two weeks later, Dr. Winnie Hartstrong, a Republican congressional candidate who hoped to represent Missouri’s 1st District, posted a 23-page “report” pushing a conspiracy theory that Floyd had died years earlier and that someone had used deepfake technology to superimpose his face onto the body of an ex-NBA player to create a video to stir up racial tensions.

“Even I was surprised at how quickly this happened,” Schick said, adding “this wasn’t somebody on, like 4chan or like Reddit or some troll. This is a real person who is standing for public office.”

“In 2020, that didn’t gain that much traction. Only people like me and other deepfake researchers really saw that and were like, ‘wow,’ and kind of marked that as an interesting case study,” Schick said.

But fast-forward a few years, once the public becomes more aware of deepfakes and the “corrosion of the information ecosystem” that has already polarized politics so heavily, Schick said, “and you can see how very quickly even events like George Floyd’s death no longer are true unless you believe them to be true.”

A comparison of an original and deepfake video of Facebook CEO Mark Zuckerberg.

Locking down deepfakes is impossible – inoculation is the next best bet

Citron and Chesney warned in their paper that the liar’s “dividend” – the payoff for bad actors who leverage the existence of deepfakes as cover for their bad behavior – will get even bigger as the public gets used to seeing deepfakes.

But banning deepfakes entirely could make the problem worse, according to Schick, who pointed to China, the only country with a national rule outlawing deepfakes.

“Let’s say some very problematic footage were to emerge from Xinjiang province, for instance, showing Uyghurs in the internment camps,” she said. “Now the central authority in China has the power to say, ‘well, this is a deepfake, and this is illegal.'”

Combined with Beijing’s control over the country’s internet, Schick said, “and you can see why this power to say what’s real and what’s not can be this very effective tool of coercion. You shape the reality.”

With an outright ban out of the question, the experts who spoke to Insider said a variety of technological, legal, regulatory, and educational approaches are needed.

“Ultimately, it’s also a little bit up to us as consumers to be inoculated against these kinds of techniques,” Li said, adding that people should approach social media with the same skepticism they would a tabloid, especially when what they see hasn’t been confirmed by multiple reliable news outlets or other official sources.

Schick agreed, saying “there has to be kind of some society-wide resilience building” – not only around bad actors’ ability to use real deepfakes to spread fake news, but also around their ability to dismiss real news as the product of nonexistent deepfakes.

Read the original article on Business Insider

The Tom Cruise deepfakes were hard to create. But less sophisticated ‘shallowfakes’ are already wreaking havoc

Tom Cruise onstage during the 10th Annual Lumiere Awards at Warner Bros. Studios on January 30, 2019, in Burbank, California.
  • The convincing Tom Cruise deepfakes that went viral last month took lots of skill to create.
  • But less sophisticated “shallowfakes” and other synthetic media are already creating havoc.
  • DARPA’s AI experts mapped out how hard it would be to create these emerging types of fake media.

The coiffed hair, the squint, the jaw clench, and even the signature cackle – it all looks and sounds virtually indistinguishable from the real Tom Cruise.

But the uncanny lookalikes that went viral on TikTok last month under the handle @deeptomcruise were deepfakes, a collaboration between Belgian visual-effects artist Chris Ume and Tom Cruise impersonator Miles Fisher.

The content was entertaining and harmless, with the fake Cruise performing magic tricks, practicing his golf swing, and indulging in a Bubble Pop. Still, the videos – which have racked up an average of 5.6 million views each – reignited people’s fears about the dangers of the most cutting-edge type of fake media.

“Deepfakes seem to tap into a really visceral part of people’s minds,” Henry Ajder, a UK-based deepfakes expert, told Insider.

“When you watch that Tom Cruise deepfake, you don’t need an analogy because you’re seeing it with your own two eyes and you’re being kind of fooled even though you know it’s not real,” he said. “Being fooled is a very intimate experience. And if someone is fooled by a deepfake, it makes them sit up and pay attention.”


The good news: it’s really hard to make such a convincing deepfake. It took Ume two months to train the AI-powered tool that generated the deepfakes, 24 hours to edit each minute-long video, and a talented human impersonator to mimic the hair, body shape, mannerisms, and voice, according to The New York Times.

The bad news: it won’t be that hard for long, and major advances in the technology in recent years have unleashed a wave of apps and free tools that enable people with few skills or resources to create increasingly good deepfakes.

Nina Schick, a deepfake expert and former advisor to Joe Biden, told Insider this “rapid commodification of the technology” is already wreaking havoc.

“Are you just really concerned about the high-fidelity side of this? Absolutely not,” Schick said, adding that working at the intersection of geopolitics and technology has taught her that “it doesn’t have to be terribly sophisticated for it to be effective and do damage.”

The Defense Advanced Research Projects Agency (DARPA) is well aware of this diverse landscape, and its Media Forensics (MediFor) team is working alongside private-sector researchers to develop tools that can detect manipulated media, including deepfakes as well as cheapfakes and shallowfakes.

As part of its research, DARPA’s MediFor team mapped out different types of synthetic media – and the level of skill and resources an individual, group, or an adversarial country would need to create it.

A slide from DARPA’s MediFor team mapping the synthetic-media threat landscape.

Hollywood-level productions – like those in “Rogue One: A Star Wars Story” or “The Irishman” – require lots of resources and skill to create, even though they typically aren’t AI-powered (though Disney is experimenting with deepfakes). On the other end of the scale, bad actors with little training have used simple video-editing techniques to make House Speaker Nancy Pelosi appear drunk and to incite violence in Ivory Coast, South Sudan, Kenya, and Burma.

Schick said the Facebook-fueled genocide against Rohingya Muslims also relied mostly on these so-called “cheapfakes” and “shallowfakes” – synthetic or manipulated media altered using less advanced, non-AI tools.

But deepfakes aren’t just being used to spread political misinformation, and experts told Insider ordinary people may have the most to lose if they become a target.

Last month, a woman was arrested in Pennsylvania and charged with cyber harassment on suspicion of making deepfake videos of teen cheerleaders naked and smoking, in an attempt to get them kicked off her daughter’s squad.

“It’s almost certain that we’re going to see some kind of porn version of this app,” Schick said. In a recent op-ed in Wired, she and Ajder wrote about a bot Ajder helped discover on Telegram that turned 100,000 user-provided photos of women and underage children into deepfake porn – and how app developers need to take proactive steps to prevent this kind of abuse.

Experts told Insider they’re particularly concerned about these types of cases because the victims often lack the money and status to set the record straight.

“The celebrity porn [deepfakes] have already come out, but they have the resources to protect themselves … the PR team, the legal team … millions of supporters,” Schick said. “What about everyone else?”

As with most new technologies, from facial recognition to social media to COVID-19 vaccines, women, people of color, and other historically marginalized groups tend to bear a disproportionate share of the abuse and bias stemming from their use.

To counter the threat posed by deepfakes, experts say society needs a multipronged approach that includes government regulation, proactive steps by technology and social media companies, and public education about how to think critically and navigate our constantly evolving information ecosystem.
