Right-wing media, most prominently Fox News, has promoted three major false stories in just the last few days.
Last Friday, The New York Post published a cover story claiming that copies of Vice President Kamala Harris’ 2019 children’s book, “Superheroes Are Everywhere,” were being given to migrant children at a Department of Health and Human Services shelter in Long Beach, California. The Post provided no evidence for the claim aside from a single Reuters photograph of Harris’ book propped against a backpack on a table.
The story was picked up by a host of right-wing media, including Fox News, which co-authored a follow-up story with Laura Italiano, the Post reporter who wrote the original piece. A slew of prominent Republican lawmakers, including GOP Sen. Tom Cotton and Republican National Committee Chair Ronna McDaniel, promoted the Post’s story.
But the story was quickly debunked. The Washington Post’s fact-checker, which gave the New York Post story “four Pinocchios” on Tuesday, reported that the tabloid had based its entire story on a photo of a single copy of the book, donated to the shelter by a community member. The New York Post deleted its two stories on the matter and later republished them with corrections and editor’s notes added.
Spokespeople for Fox News did not respond to Insider’s request for comment on the network’s reporting, but the network quietly added an editor’s note to its story about the White House’s response to the Post’s reporting and deleted Italiano’s byline.
Also last Friday, Fox News ran multiple segments falsely claiming that President Joe Biden’s administration would require Americans to radically reduce their red meat consumption under Biden’s climate policy. Fox’s on-air discussions relied on a Daily Mail story that reported the Biden administration “could” require Americans to cut their red meat consumption by 90%, citing academic studies showing that reductions in animal products help cut greenhouse gas emissions.
In reality, Biden has no plan to require Americans to eat less red meat.
Fox chyrons read, “Bye-Bye Burgers Under Biden’s Climate plan” and “90% of Red Meat Out With Biden Climate Plan.” One graphic falsely stated that “Biden’s climate requirements” include a maximum of four pounds of red meat consumption a year and “one burger per month.” A slew of conservative lawmakers promoted the false claims and lashed out at the Biden administration. Donald Trump Jr. claimed he’d likely eaten four pounds of red meat the previous day.
“Joe Biden’s climate plan includes cutting 90% of red meat from our diets by 2030. They want to limit us to about four pounds a year. Why doesn’t Joe stay out of my kitchen?” Colorado Rep. Lauren Boebert tweeted on Saturday.
Fox host John Roberts acknowledged in a brief on-air correction on Monday that the claims were wrong. Roberts said the network’s graphic and script “incorrectly implied” that reducing red meat consumption “was part of Biden’s plan for dealing with climate change.”
Fox and other right-wing media also ran with a story claiming that Virginia’s public schools were moving to eliminate accelerated high school math courses to improve racial equity, “effectively keeping higher-achieving students from advancing as they usually would in the school system.”
The stories, which were amplified by Fox’s opinion side, were false and overblown. Virginia’s superintendent of public instruction, James Lane, told The Washington Post that the state’s department of education is beginning a regular evaluation of its math curriculum and is not eliminating any advanced classes.
In April 2018, BuzzFeed released a shockingly realistic video of a Barack Obama deepfake where the former president’s digital lookalike appeared to call his successor, Donald Trump, a “dips–t.”
At the time, as visually convincing as the AI creation was, the video’s shock value actually made it easier for people to identify it as a fake. So did BuzzFeed’s reveal, later in the video, that Obama’s avatar was voiced by comedian and Obama impersonator Jordan Peele.
BuzzFeed’s title for the clip – “You Won’t Believe What Obama Says In This Video! 😉” – also hinted at why even the most convincing deepfakes so quickly raise red flags. Because deepfakes are an extremely new invention and still a relatively rare sighting for many people, these digital doppelgängers stick out from the surrounding media landscape, forcing us to do a double-take.
But that won’t be true forever, because deepfakes and other “synthetic” media are becoming increasingly common in our feeds and For You Pages.
Hao Li, a deepfake creator and the CEO and co-founder of Pinscreen, a startup that uses AI to create digital avatars, told Insider the number of deepfakes online is doubling “pretty much every six months,” with most of them currently in pornography.
As they spread to the rest of the internet, it’s going to get exponentially harder to separate fact from fiction, according to Li and other experts.
“My biggest concern is not the abuse of deepfakes, but the implication of entering a world where any image, video, audio can be manipulated. In this world, if anything can be fake, then nothing has to be real, and anyone can conveniently dismiss inconvenient facts” as synthetic media, Hany Farid, an AI and deepfakes researcher and associate dean of UC Berkeley’s School of Information, told Insider.
But, they added, “some of the most dangerous lies” could come from bad actors trying to “escape accountability for their actions by denouncing authentic video and audio as deep fakes.”
George Floyd deepfake conspiracy
One such attempt to exploit the liar’s dividend, though ultimately unsuccessful, happened last year after the video of George Floyd’s death went viral.
“That event could not have been dismissed as being unreal or not having happened, or so you would think,” Nina Schick, an expert on deepfakes and former advisor to Joe Biden, told Insider.
Yet only two weeks later, Dr. Winnie Hartstrong, a Republican congressional candidate who hoped to represent Missouri’s 1st District, posted a 23-page “report” pushing a conspiracy theory that Floyd had died years earlier and that someone had used deepfake technology to superimpose his face onto the body of an ex-NBA player to create a video to stir up racial tensions.
“Even I was surprised at how quickly this happened,” Schick said, adding, “this wasn’t somebody on, like 4chan or like Reddit or some troll. This is a real person who is standing for public office.”
“In 2020, that didn’t gain that much traction. Only people like me and other deepfake researchers really saw that and were like, ‘wow,’ and kind of marked that as an interesting case study,” Schick said.
But fast-forward a few years, once the public becomes more aware of deepfakes and the “corrosion of the information ecosystem” that has already polarized politics so heavily, Schick said, “and you can see how very quickly even events like George Floyd’s death no longer are true unless you believe them to be true.”
Locking down deepfakes is impossible – inoculation is the next best bet
Citron and Chesney warned in their paper that the “liar’s dividend” – the payoff for bad actors who leverage the existence of deepfakes as cover for their bad behavior – will get even bigger as the public gets used to seeing deepfakes.
“Let’s say some very problematic footage were to emerge from Xinjiang province, for instance, showing Uyghurs in the internment camps,” Schick said. “Now the central authority in China has the power to say, ‘well, this is a deepfake, and this is illegal.'”
With an outright ban out of the question, the experts who spoke to Insider said a variety of technological, legal, regulatory, and educational approaches are needed.
“Ultimately, it’s also a little bit up to us as consumers to be inoculated against these kinds of techniques,” Li said, adding that people should approach social media with the same skepticism they would a tabloid, especially if it hasn’t been confirmed by multiple reliable news or other official sources.
Schick agreed, saying “there has to be kind of some society-wide resilience building” – not only around bad actors’ ability to use real deepfakes to spread fake news, but also around their ability to dismiss real news as the product of nonexistent deepfakes.
The coiffed hair, the squint, the jaw clench, and even the signature cackle – it all looks and sounds virtually indistinguishable from the real Tom Cruise.
But the uncanny lookalikes that went viral on TikTok last month under the handle @deeptomcruise were deepfakes, a collaboration between Belgian visual-effects artist Chris Ume and Tom Cruise impersonator Miles Fisher.
The content was entertaining and harmless, with the fake Cruise performing magic tricks, practicing his golf swing, and indulging in a Bubble Pop. Still, the videos – which have racked up an average of 5.6 million views each – reignited people’s fears about the dangers of the most cutting-edge type of fake media.
“Deepfakes seem to tap into a really visceral part of people’s minds,” Henry Ajder, a UK-based deepfakes expert, told Insider.
“When you watch that Tom Cruise deepfake, you don’t need an analogy because you’re seeing it with your own two eyes and you’re being kind of fooled even though you know it’s not real,” he said. “Being fooled is a very intimate experience. And if someone is fooled by a deepfake, it makes them sit up and pay attention.”
The good news: it’s really hard to make such a convincing deepfake. It took Ume two months to train the AI-powered tool that generated the deepfakes, 24 hours to edit each minute-long video, and a talented human impersonator to mimic the hair, body shape, mannerisms, and voice, according to The New York Times.
The bad news: it won’t be that hard for long, and major advances in the technology in recent years have unleashed a wave of apps and free tools that enable people with few skills or resources to create increasingly good deepfakes.
Nina Schick, a deepfake expert and former advisor to Joe Biden, told Insider this “rapid commodification of the technology” is already wreaking havoc.
“Are you just really concerned about the high-fidelity side of this? Absolutely not,” Schick said, adding that working at the intersection of geopolitics and technology has taught her that “it doesn’t have to be terribly sophisticated for it to be effective and do damage.”
The Defense Advanced Research Projects Agency (DARPA) is well aware of this diverse landscape, and its Media Forensics (MediFor) team is working alongside private sector researchers to develop tools that can detect manipulated media, including deepfakes as well as cheapfakes and shallowfakes.
As part of its research, DARPA’s MediFor team mapped out different types of synthetic media – and the level of skill and resources an individual, group, or an adversarial country would need to create it.
Schick said the Facebook-fueled genocide against Rohingya Muslims also relied mostly on these so-called “cheapfakes” and “shallowfakes” – synthetic or manipulated media altered using less advanced, non-AI tools.
But deepfakes aren’t just being used to spread political misinformation, and experts told Insider ordinary people may have the most to lose if they become a target.
Last month, a woman was arrested in Pennsylvania and charged with cyber harassment on suspicion of making deepfake videos of teen cheerleaders naked and smoking, in an attempt to get them kicked off her daughter’s squad.
“It’s almost certain that we’re going to see some kind of porn version of this app,” Schick said. In a recent op-ed in Wired, she and Ajder wrote about a bot Ajder helped discover on Telegram that turned 100,000 user-provided photos of women and underage children into deepfake porn – and how app developers need to take proactive steps to prevent this kind of abuse.
Experts told Insider they’re particularly concerned about these types of cases because the victims often lack the money and status to set the record straight.
“The celebrity porn [deepfakes] have already come out, but they have the resources to protect themselves … the PR team, the legal team … millions of supporters,” Schick said. “What about everyone else?”
As with most new technologies, from facial recognition to social media to COVID-19 vaccines, women, people of color, and other historically marginalized groups tend to bear a disproportionate share of the abuse and bias that stem from their use.
To counter the threat posed by deepfakes, experts say society needs a multipronged approach that includes government regulation, proactive steps by technology and social media companies, and public education about how to think critically and navigate our constantly evolving information ecosystem.
Fitness instructor Shauna Harrison’s Instagram feed consists of simple workout routines and yoga stretches she shares with her 84,000 followers.
Occasionally, though, Harrison, who has a doctorate in public health, will share photos of herself wearing masks that say “Talk Data to Me” with captions relaying the importance of staying home and social distancing.
“I know I get some heat on here for promoting masks and supporting Black Lives and LGBTQIA rights and vaccines,” Harrison wrote in one caption. “I’m here to promote health, to promote wellness. Which inherently includes protecting the rights and lives of marginalized people.”
Harrison is part of a “network of micro-influencers” who have partnered with data scientists at Public Good Projects, a public health communication non-profit, to help spread accurate vaccine and COVID-19 information.
Fake claims about COVID-19 have spread on social media throughout the pandemic, complicating public health practices. Messages telling people not to wear masks – despite overwhelming scientific evidence that the face covering can slow COVID-19 transmission – have snowballed on Facebook and other social media platforms. In April, trolls and bots flooded social media with hashtags encouraging anti-quarantine messages, while some Americans held protests demanding states re-open businesses.
Now, as the US ramps up vaccine distribution, experts warn misinformation could hinder widespread immunization. Facebook removed a post falsely claiming the COVID-19 vaccine would lead to infertility after it had already garnered hundreds of shares.
Joe Smyser, the CEO of Public Good Projects, said monitoring COVID-19 misinformation over the last nine months had been “overwhelming and at times exhausting.” Smyser said the lack of coherent messaging on COVID-19 vaccines has created a “vacuum,” allowing fake claims to reach Americans on social media.
“Right now the volume of information about vaccines, but also just about public health policies and the pandemic in general, the volume is much bigger on the bad side of things than the good side of things,” Smyser told Business Insider. “There’s more misinformation than there is truth.”
Data scientists and influencers are working together to combat COVID-19 vaccine misinformation.
PGP began tracking vaccine hesitancy on social media last year, and created a complementary system to track misinformation related to COVID-19 once the pandemic began in 2020. Data scientists track which false claims could harm public health and work with public health experts to craft rebuttals debunking the misinformation on social media.
Smyser said PGP selects microinfluencers based on their audience. The team seeks audiences with high rates of vaccine hesitancy based on past research. The influencers PGP works with include fashion and beauty influencers, mommy bloggers, and music creators.
“Some of the people we work with have a health background, but most are just average everyday people who, for their own reasons, have more influence where they live than other people,” Smyser said. “We find that the way that’s most effective to communicate with people is a non-health expert saying something in whatever way they want to say it.”
Smyser said a danger to sharing vaccine information online is the “global network” of conspiracy theorist groups that monitor hashtags used by major public health agencies and flood posts using them with fake claims. Harrison, the fitness influencer, said she’s had to combat “trolls” on posts about wearing masks.
“They’re just looking through hashtags looking for people who are posting these things and they come and they start throwing their 2 cents into your comments, saying that COVID is not real or that masks are going to cause breathing problems,” Harrison told Business Insider. “There’s a million things that they say that don’t make any sense.”
Smyser said people against vaccines and other public health tools organized into a political movement in 2020. Just 129 accounts are predominantly responsible for misinformation about COVID-19 vaccines on Twitter, according to peer-reviewed research from PGP.
Anatoliy Gruzd, director of research at the Social Media Lab, which tracks the spread of debunked COVID-19 claims on social media, said even though a “small percentage” of bad actors create misinformation online, fake claims spread quickly from regular social media users who can easily circulate messages they do not double check. According to Social Media Lab, fake claims on social media spiked starting December 1, around when vaccines began receiving authorization from regulators.
Good Samaritans have started to understand how positive vaccine information can reach people online, and are allocating their own resources to inform the public.
Unlike Harrison, Rob Swanda doesn’t see himself as an influencer, but he has also devoted his free time to spreading good information on COVID-19.
He originally made the video to explain to his grandmother how the Pfizer and Moderna vaccines work, after hearing her and his own parents share misinformation, like claims that the COVID-19 mRNA vaccines would inject people with the coronavirus or mutate their cells.
The video went viral on Twitter, amassing 134,000 likes and 44,000 retweets.
Swanda believes his video took off because he used a simple whiteboard to explain the vaccine rather than a complicated graphic design.
“I think there’s a big challenge in terms of making the information come across accessible,” Swanda said, adding that scientists sometimes struggle to explain complicated research in layman’s terms.
Visuals, like the ones Swanda and Harrison used to spread correct COVID-19 information, can help skeptical people understand facts around COVID-19, according to Emma Frances Bloomfield, an assistant professor of communication studies at the University of Nevada, Las Vegas. Bloomfield has studied how to engage with climate change and COVID-19 skeptics to relay correct, scientific information.
Bloomfield said leveraging personal relationships with friends works best when conveying facts about science and public health. People who are predisposed to doubt authority will trust non-political sources of information, like social media influencers, Bloomfield said.
“Actually having people get the vaccine is going to be the crucial thing,” Harrison said. “There’s a lot of different reasons why people are nervous about that or against it. I think that’s the biggest hurdle right now.”