A top Facebook exec told a whistleblower her concerns about widespread state-sponsored disinformation meant she had ‘job security’

In this April 11, 2018, file photo, Facebook CEO Mark Zuckerberg pauses while testifying before a House Energy and Commerce hearing on Capitol Hill in Washington.

  • Facebook let dictators generate fake support despite employees’ warnings, the Guardian reported.
  • Whistleblower Sophie Zhang repeatedly raised concerns to integrity chief Guy Rosen and other execs.
  • But Rosen said the amount of disinformation on the platform meant “job security” for Zhang.
  • See more stories on Insider’s business page.

Facebook allowed authoritarian governments to use its platform to generate fake support for their regimes for months despite warnings from employees about the disinformation campaigns, an investigation from the Guardian revealed this week.

A loophole in Facebook’s policies allowed government officials around the world to create unlimited numbers of fake “pages” which, unlike user profiles, don’t have to correspond to an actual person – but could still like, comment on, react to, and share content, the Guardian reported.

That loophole let governments spin up armies of what looked like real users who could then artificially generate support for and amplify pro-government content, in what the Guardian called “the digital equivalent of bussing in a fake crowd for a speech.”

Sophie Zhang, a former Facebook data scientist on the company’s integrity team, blew the whistle about the loophole dozens of times, raising her concerns with Facebook executives including vice president of integrity Guy Rosen, according to the Guardian.

BuzzFeed News previously reported on Zhang’s “badge post” – a tradition where departing employees post an internal farewell message to coworkers.

But one of Zhang’s biggest concerns was that Facebook wasn’t paying enough attention to coordinated disinformation networks in authoritarian countries, such as Honduras and Azerbaijan, where elections are less free and more susceptible to state-sponsored disinformation campaigns, the Guardian’s investigation revealed.

Facebook waited 344 days after employees sounded the alarm to take action in the Honduras case and 426 days in Azerbaijan; in some cases, it took no action at all, the investigation found.

But when Zhang raised Facebook’s inaction in Honduras with Rosen, he dismissed her concerns.

“We have literally hundreds or thousands of types of abuse (job security on integrity eh!),” Rosen told Zhang in April 2019, according to the Guardian, adding: “That’s why we should start from the end (top countries, top priority areas, things driving prevalence, etc) and try to somewhat work our way down.”

Rosen told Zhang he agreed with Facebook’s priority areas, which included the US, Western Europe, and “foreign adversaries such as Russia/Iran/etc,” according to the Guardian.

“We fundamentally disagree with Ms. Zhang’s characterization of our priorities and efforts to root out abuse on our platform. We aggressively go after abuse around the world and have specialized teams focused on this work,” Facebook spokesperson Liz Bourgeois told Insider in a statement.

“As a result, we’ve already taken down more than 100 networks of coordinated inauthentic behavior. Around half of them were domestic networks that operated in countries around the world, including those in Latin America, the Middle East and North Africa, and in the Asia Pacific region. Combatting coordinated inauthentic behavior is our priority. We’re also addressing the problems of spam and fake engagement. We investigate each issue before taking action or making public claims about them,” she said.

However, Facebook didn’t dispute any of Zhang’s factual claims in the Guardian investigation.

Facebook pledged to tackle election-related misinformation and disinformation after the Cambridge Analytica scandal and Russia’s use of its platform to sow division among American voters ahead of the 2016 US presidential elections.

“Since then, we’ve focused on improving our defenses and making it much harder for anyone to interfere in elections,” CEO Mark Zuckerberg wrote in a 2018 op-ed for The Washington Post.

“Key to our efforts has been finding and removing fake accounts – the source of much of the abuse, including misinformation. Bad actors can use computers to generate these in bulk. But with advances in artificial intelligence, we now block millions of fake accounts every day as they are being created so they can’t be used to spread spam, false news or inauthentic ads,” Zuckerberg added.

But the Guardian’s investigation showed Facebook has continued to delay or decline to take action against state-sponsored disinformation campaigns in dozens of countries, involving thousands of fake accounts that generated hundreds of thousands of fake likes.

And even in supposedly high-priority areas, like the US, researchers have found Facebook has allowed key disinformation sources to expand their reach over the years.

A March report from Avaaz found “Facebook could have prevented 10.1 billion estimated views for top-performing pages that repeatedly shared misinformation” ahead of the 2020 US elections had it acted earlier to limit their reach.

“Failure to downgrade the reach of these pages and to limit their ability to advertise in the year before the election meant Facebook allowed them to almost triple their monthly interactions, from 97 million interactions in October 2019 to 277.9 million interactions in October 2020,” Avaaz found.

Facebook admits that around 5% of its accounts are fake, a number that hasn’t gone down since 2019, according to The New York Times. And MIT Technology Review’s Karen Hao reported in March that Facebook still doesn’t have a centralized team dedicated to ensuring its AI systems and algorithms reduce the spread of misinformation.


‘Liar’s dividend’: The more we learn about deepfakes, the more dangerous they become

BuzzFeed enlisted the help of comedian and Barack Obama impersonator Jordan Peele to create a deepfake video of the former president.

  • Deepfakes are on the rise, and experts say the public needs to know the threat they pose.
  • But as people get used to them, it’ll be easier for bad actors to dismiss the truth as AI forgery.
  • Experts call that paradox the “liar’s dividend.” Here’s how it works and why it’s so dangerous.
  • See more stories on Insider’s business page.

In April 2018, BuzzFeed released a shockingly realistic video of a Barack Obama deepfake where the former president’s digital lookalike appeared to call his successor, Donald Trump, a “dips–t.”

At the time, as visually convincing as the AI creation was, the video’s shock value actually allowed people to more easily identify it as a fake. That, and BuzzFeed revealing later in the video that Obama’s avatar was voiced by comedian and Obama impersonator Jordan Peele.

BuzzFeed’s title for the clip – “You Won’t Believe What Obama Says In This Video! 😉” – also hinted at why even the most convincing deepfakes so quickly raise red flags. Because deepfakes are an extremely new invention and still a relatively rare sighting for many people, these digital doppelgängers stick out from the surrounding media landscape, forcing us to do a double-take.

But that won’t be true forever, because deepfakes and other “synthetic” media are becoming increasingly common in our feeds and For You Pages.

Hao Li, a deepfake creator and the CEO and co-founder of Pinscreen, a startup that uses AI to create digital avatars, told Insider the number of deepfakes online is doubling “pretty much every six months,” with most of them currently in pornography.

As they spread to the rest of the internet, it’s going to get exponentially harder to separate fact from fiction, according to Li and other experts.

“My biggest concern is not the abuse of deepfakes, but the implication of entering a world where any image, video, audio can be manipulated. In this world, if anything can be fake, then nothing has to be real, and anyone can conveniently dismiss inconvenient facts” as synthetic media, Hany Farid, an AI and deepfakes researcher and associate dean of UC Berkeley’s School of Information, told Insider.

That paradox is known as the “liar’s dividend,” a name given to it by law professors Danielle Citron and Robert Chesney.

Many of the harms that deepfakes can cause – such as deepfake porn, cyberbullying, corporate espionage, and political misinformation – stem from bad actors using deepfakes to “convince people that fictional things really occurred,” Citron and Chesney wrote in a 2018 research paper.

But, they added, “some of the most dangerous lies” could come from bad actors trying to “escape accountability for their actions by denouncing authentic video and audio as deep fakes.”

George Floyd deepfake conspiracy

One such attempt to exploit the liar’s dividend, though ultimately unsuccessful, happened last year after the video of George Floyd’s death went viral.

“That event could not have been dismissed as being unreal or not having happened, or so you would think,” Nina Schick, an expert on deepfakes and former advisor to Joe Biden, told Insider.

Yet only two weeks later, Dr. Winnie Hartstrong, a Republican congressional candidate who hoped to represent Missouri’s 1st District, posted a 23-page “report” pushing a conspiracy theory that Floyd had died years earlier and that someone had used deepfake technology to superimpose his face onto the body of an ex-NBA player to create a video to stir up racial tensions.

“Even I was surprised at how quickly this happened,” Schick said, adding “this wasn’t somebody on, like 4chan or like Reddit or some troll. This is a real person who is standing for public office.”

“In 2020, that didn’t gain that much traction. Only people like me and other deepfake researchers really saw that and were like, ‘wow,’ and kind of marked that as an interesting case study,” Schick said.

But fast-forward a few years, once the public becomes more aware of deepfakes and the “corrosion of the information ecosystem” that has already polarized politics so heavily, Schick said, “and you can see how very quickly even events like George Floyd’s death no longer are true unless you believe them to be true.”

A comparison of an original and deepfake video of Facebook CEO Mark Zuckerberg.

Locking down deepfakes is impossible – inoculation is the next best bet

Citron and Chesney warned in their paper that the liar’s “dividend” – the payoff for bad actors who leverage the existence of deepfakes as cover for their bad behavior – will get even bigger as the public gets used to seeing deepfakes.

But banning deepfakes entirely could make the problem worse, according to Schick, who pointed to China, the only country with a national rule outlawing deepfakes.

“Let’s say some very problematic footage were to emerge from Xinjiang province, for instance, showing Uyghurs in the internment camps,” she said. “Now the central authority in China has the power to say, ‘well, this is a deepfake, and this is illegal.'”

Combine that with Beijing’s control over the country’s internet, Schick said, “and you can see why this power to say what’s real and what’s not can be this very effective tool of coercion. You shape the reality.”

With an outright ban out of the question, the experts who spoke to Insider said a variety of technological, legal, regulatory, and educational approaches are needed.

“Ultimately, it’s also a little bit up to us as consumers to be inoculated against these kinds of techniques,” Li said, adding that people should approach social media with the same skepticism they would a tabloid, especially when content hasn’t been confirmed by multiple reliable news outlets or other official sources.

Schick agreed, saying “there has to be kind of some society-wide resilience building” – not only around bad actors’ ability to use real deepfakes to spread fake news, but also around their ability to dismiss real news as the product of nonexistent deepfakes.


Christians are increasingly falling victim to online falsehoods. Meet the librarian leading the fight against conspiracy theories and health misinformation.

A worshipper wearing a protective face mask checks her smartphone while attending a Sunday service on May 10, 2020.

  • A librarian is leading the church’s fight against misinformation by holding online media workshops.
  • Rachel Wightman is teaching church congregants to identify fake news, and fact-check their sources.
  • White evangelicals are more likely to believe in conspiracy theories, recent polls show.
  • See more stories on Insider’s business page.

A local librarian from Minnesota is leading the fight against misinformation in Christian churches by holding online media training for congregations across the country.

Rachel Wightman works full-time at Concordia University, St. Paul, but started teaching six-week seminars part-time after watching a worrying number of people in her community become misled by online misinformation.

The presidential election prompted Wightman to give her first workshop at her local Mill City Church in Minneapolis in early 2020. But the coronavirus pandemic paired with the Black Lives Matter protests made her workshops a lot more pertinent, so she decided to organize more.

The BLM movement, in particular, hit close to home – Wightman’s church was only a few miles away from where George Floyd was killed, and for most of the summer months, the city was gripped by protests.


“I remember the day our pastor was talking about racism and saying we have to check our inputs, meaning we have to get inputs from people who are different in order to understand this issue,” Wightman told Insider. “That was the moment for me where it really clicked. I knew I had to continue giving people tools to get to these inputs.”

In the last few weeks, the librarian has been inundated with requests from other pastors around the US asking her to give her workshops to their congregations.

The need for such training seems to be as urgent as ever: Over the last year, churches have become engulfed in conspiracy theories and health misinformation, in some cases even prompting pastors to leave their congregations.

Recent polls show that white evangelicals have one of the highest levels of vaccine skepticism in the United States. According to a Washington Post/ABC News poll published in January, just under a third of US adults say they will probably or definitely not get the vaccine, compared to 44% of those who identify as white evangelicals.

Another poll by the Christian research organization Lifeway Research found that more than 45% of Protestant pastors said they had often heard congregants repeating conspiracy theories.

“As a librarian, I’m seeing this huge information landscape every day, and I feel like it’s incredibly overwhelming for people,” Wightman said. “We’ve all spent this past year in this hyped-up environment where everything feels urgent and stressful, so I try to encourage people to take some space and say: ‘Okay, I’m going to figure out how to slow down and make sense of everything around me.'”

Rachel Wightman, Associate Director for Instruction and Outreach at Concordia University in St. Paul, Minnesota.

Due to the pandemic, Wightman meets most of her students on Zoom. Together they cover everything from how to identify fake photographs and how algorithms work to how to fact-check sources and how to avoid being judgmental when friends post something inaccurate online.

Wightman stressed that while the training is a good space to talk about all the information people find online, it is also “politically neutral.”

“We’re not here to talk about your opinion on the latest legislation or our president. We are here to talk about how do you evaluate what you’re finding online … and how that overlaps with your faith,” she said.

For the librarian, it is also important to keep faith at the center of her teachings.

“I want to also bring in this perspective of Christianity. As Christians, we need to ask ourselves, if you have this faith of loving your neighbors, in what spaces does your faith show up?”

The librarian said her workshops had been received well by many churchgoers, who vary in age and race. Many are also taking the training to help family members who have succumbed to online misinformation, Wightman said.

Pro-Trump supporters storm the U.S. Capitol following a rally with President Donald Trump on January 6, 2021 in Washington, DC.

Dr. Christopher Douglas, a professor of English at the University of Victoria who specializes in Christian literature, politics, and epistemology, thinks such training is essential in this day and age.

“Misinformation is in some sense baked into white evangelical churches as many of them reject science, scholarship, and mainstream journalism,” Douglas told Insider. “It’s a small step from disputing the science of evolution and climate change to doubting the efficacy of masks and vaccines in fighting the pandemic because it all comes from a common source, which is mainstream ‘secular’ science.”

Douglas believes 2020’s pandemic and election exacerbated this problem as many feel like their political opponents are trying to “destroy Christian America and to take away what they call their ‘religious freedoms.'”

This is why Christian churches need training like Wightman’s, Douglas said. “Public institutions like libraries, colleges, and universities all have a role to play in developing critical thinking and critical media literacy skills,” he said.

Even though Wightman is balancing her new work with a full-time job, she said she’s proud of what she’s accomplished so far and hopes to continue doing more workshops in the future.

“A lot of people think librarians just sit around and read all day, so it’s been fun to bust that myth open a bit,” said Wightman. “We’re teachers, we’re about connecting people with information, and so to be able to do that in a new way that feels so relevant is very exciting.”


Mark Zuckerberg claimed the reason Facebook keeps showing up in Capitol riot lawsuits is because it’s really helpful to police

Facebook CEO Mark Zuckerberg testifies before Congress remotely during a Senate hearing on November 17, 2020.

  • Facebook was cited in more legal docs about the Capitol riots than any other social-media platform.
  • Mark Zuckerberg told Congress it’s because Facebook has been cooperating with law enforcement.
  • On Thursday, he also downplayed Facebook’s role in allowing misinformation and violence to spread.
  • See more stories on Insider’s business page.

Facebook CEO Mark Zuckerberg has a theory about why his company keeps showing up in legal documents surrounding the attempted insurrection on January 6: it’s just doing a really good job helping out with law enforcement’s investigations.

Last month, a report found Facebook was the most widely referenced social-media platform in court documents used to charge 223 people with crimes in connection with the Capitol attack.

On Thursday, Congress hauled the CEOs of Facebook, Twitter, and Google-parent Alphabet in for a hearing to examine the role social media companies played in amplifying misinformation and allowing extremists to organize, and Zuckerberg was grilled about the report.

Rep. Paul Tonko, a Democrat from New York, asked Zuckerberg whether he still denied Facebook was a “significant megaphone for the lies that fueled the insurrection” amid “growing evidence” suggesting it was, including the charging documents.

“I think part of the reason why our services are cited in the charging docs is because we worked closely with law enforcement to help identify the people who were there,” Zuckerberg said, adding that such “collaboration” shouldn’t “be seen as a negative reflection on our services.”

Zuckerberg and other Facebook executives have repeatedly downplayed the company’s role in the insurrection. COO Sheryl Sandberg said in mid-January that the events surrounding the “Stop the Steal” rally, which immediately preceded the Capitol attack, “were largely organized on platforms that don’t have our abilities to stop hate, and don’t have our standards, and don’t have our transparency.”

But numerous media and research reports have emerged since then showing that, despite its talk about cracking down, Facebook still allowed misinformation to spread widely and violence-promoting groups to gather.

A report from the research group Avaaz, released this week, found 267 violence-glorifying groups, with a combined audience of 32 million people, had “almost tripled their monthly interactions – from 97 million interactions in October 2019 to 277.9 million interactions in July 2020.”

Avaaz said 188 of those groups remained active despite “clear violations of Facebook’s policies,” and that even after contacting Facebook, the company still let 97 groups continue to use its platform.

Facebook executives have long known its groups-focused features have been a hotbed of extremism. The Wall Street Journal reported in January that Facebook’s data scientists had told executives that 70% of its 100 most active “civic groups” were rife with hate speech, misinformation, bullying, and harassment.

“Our existing integrity systems,” they told executives, according to The Journal, “aren’t addressing these issues.”

Zuckerberg and others at Facebook, such as policy head Joel Kaplan, even killed or weakened projects aimed at stemming the flow of such content, The Journal previously reported.

Yet Zuckerberg this week continued to deny Facebook has a serious issue with how it moderates content.

“There was content on our services from some of [the insurrectionists],” he said. “I think that that was problematic, but by and large, I also think that by putting in place policies banning QAnon, banning militias, banning other conspiracy networks, we generally made our services inhospitable to a lot of these folks.”

So far, the evidence doesn’t appear to support Zuckerberg’s claims.

The Tech Transparency Project said it has been warning Facebook about the surge in far-right groups since last April, but that it continued to find “numerous instances of domestic extremists discussing weapons and tactics, coordinating their activities, and spreading calls to overthrow the government on Facebook, up to and including the mob attack on the Capitol.”


Mark Zuckerberg said content moderation requires ‘nuances’ that consider the intent behind a post, but also highlighted Facebook’s reliance on AI to do that job

  • Facebook’s Mark Zuckerberg said the “nuances” of content matter when moderating posts.
  • The CEO also said Facebook relies on automation – more than 95% of hate speech is taken down by AI.
  • Zuckerberg was joined by the CEOs of Twitter and Google in a misinformation hearing on Thursday.
  • See more stories on Insider’s business page.

Facebook CEO Mark Zuckerberg said the nuances of counter-speech are what make it difficult to moderate content online, in comments made during a misinformation hearing on Thursday.

When machines are relied on to moderate posts, nuance can get lost, a point Zuckerberg made when Democratic congresswoman Doris Matsui of California asked him about anti-Asian hashtags regarding the pandemic that appeared on social media platforms. Specifically, the Facebook CEO said users who make counterarguments against hateful rhetoric sometimes use the same hashtags as people who use them maliciously.

“One of the nuances that Jack [Dorsey] highlighted that we certainly see as well in enforcing hate speech policies is that we need to be clear about when someone is saying something because they’re using it in a hateful way versus when they’re denouncing it,” Zuckerberg said.

That, in turn, prevents the company from relying on automated moderation to block those words or phrases.

But Zuckerberg also detailed Facebook’s reliance on artificial intelligence in weeding through content online when he was later asked by Democratic Rep. Tom O’Halleran about what the company has done to “increase reviewer capacity.”

“More than 95% of the hate speech that we take down is done by an AI and not by a person,” Zuckerberg said during the hearing. “And I think it’s 98 or 99% of the terrorist content that we take down is identified by an AI and not a person.”

Content moderation was a central topic of discussion during Thursday’s hearing on misinformation, which was scheduled to discuss the role that Facebook, Google, and Twitter play in how false information spreads online. Misinformation about the COVID-19 pandemic, the 2020 presidential election, and the January 6 US Capitol siege was specifically highlighted throughout the hearing.

The hours-long hearing covered a variety of topics. Tech regulation and issues relating to the industry have become largely politicized, and lawmakers from both sides of the aisle reverted to familiar talking points when grilling the executives.

Some Republicans questioned the CEOs over alleged conservative discrimination online, citing a New York Post article about President Joe Biden’s son that was published last year and blocked by Twitter. And Democratic lawmakers grilled the tech leaders over what they said was a lack of action taken to combat misinformation and hate speech online.


Mark Zuckerberg said policing bullying is hard when the content is ‘not clearly illegal’ – in 44 states, cyberbullying can bring criminal sanctions

Mark Zuckerberg at the 56th Munich Security Conference in February 2020.

  • US Rep. Fred Upton asked Mark Zuckerberg what Facebook was doing to stop bullying.
  • Zuckerberg said the site has trouble moderating that content because it’s “not clearly illegal.”
  • 48 states have laws against online harassment and bullying.
  • See more stories on Insider’s business page.

Facebook CEO Mark Zuckerberg said it has been difficult for his social media network to police cyberbullying content, after a representative called him out for a lack of moderation on Facebook during a misinformation hearing on Thursday.

US Rep. Fred Upton, a Republican, referred to Monday’s shooting in Boulder, Colorado, saying there was a lot of speculation that the shooter had been bullied online. He asked Zuckerberg what Facebook was doing to stop bullying on its platform.

“It’s horrible and we need to fight it and we have policies that are against it, but it also is often the case that bullying content is not clearly illegal,” Zuckerberg said during the hearing.

Forty-eight states have laws against online harassment, which includes cyberbullying, according to data from a cyberbullying research site. Forty-four of those states also impose criminal sanctions for online bullying and harassment, the research shows.


During the hearing, Zuckerberg presented several changes that could be made to internet legislation in the US, including increased transparency for platforms like Facebook, standards for addressing illegal content like cyberbullying on social media, as well as laws protecting smaller social media platforms from lawsuits and heavy regulations.

“When I was starting Facebook, if we had been hit with a lot of lawsuits around content, it might have been prohibitive for me getting started,” Zuckerberg said.

The purpose of Thursday’s hearing was to address the role of tech companies like Google, Facebook, and Twitter in the spread of misinformation – in particular, false information about the coronavirus, the US election, and the Capitol siege.

The sites were identified as a primary source of information for insurrectionists leading up to the attack on the Capitol. Many of the people who stormed the Capitol organized on platforms like Facebook in the weeks leading up to the siege.

Experts have also said that Facebook and Twitter should be held accountable for their hands-off approach to content moderation, and for potentially profiting from the spread of misinformation on their sites.


GOP Sen. Ron Johnson falsely claimed Greenland only recently froze and now admits he has ‘no idea’ about its history

Sen. Ron Johnson, R-Wis.

  • In an attempt to undermine climate science, GOP Sen. Ron Johnson falsely claimed that Greenland was named for its once-green landscapes.
  • Johnson admitted to The New York Times last week that he had “no idea” how Greenland got its name.
  • Johnson has rejected the science proving that climate change is overwhelmingly caused by human activity.
  • See more stories on Insider’s business page.

In an attempt to undermine climate science, Sen. Ron Johnson falsely claimed in 2010 that Greenland – a largely ice-covered island – was named for its once-green landscapes.

Johnson, a Wisconsin Republican, admitted to The New York Times last week that he had “no idea” how Greenland got its name.

“You know, there’s a reason Greenland was called Greenland,” Johnson told Madison news outlet WKOW-TV in 2010. “It was actually green at one point in time. And it’s been, you know, since, it’s a whole lot whiter now so we’ve experienced climate change throughout geologic time.”

In reality, Erik Thorvaldsson, a Viking settler also known as Erik the Red, gave Greenland a misleading name in the hopes of attracting Europeans to the island. The Danish territory has been covered in ice and glaciers for at least 2.5 million years.

“I could be wrong there, but that’s always been my assumption that, at some point in time, those early explorers saw green,” Johnson told The Times last week. “I have no idea.”

Some of those who deny the scientific consensus on climate change spread the myth that ice ages and warm periods between them prove that the global warming the Earth is currently experiencing is natural. Johnson has repeatedly rejected the science proving that climate change is overwhelmingly caused by human activity. He’s falsely claimed that global warming is caused by sunspots and that there’s nothing humans can do to reverse the phenomenon.

“If you take a look at geologic time, we’ve had huge climate swings,” Johnson told the Milwaukee Journal Sentinel in a 2010 interview. “I absolutely do not believe that the science of man-caused climate change is proven, not by any stretch of the imagination. I think it’s far more likely that it’s just sunspot activity or something just in the geologic eons of time where we have changes in the climate.”

He went on, “The Middle Ages was an extremely warm period in time too, and it wasn’t like there were tons of cars on the road.”

Johnson also claimed that attempting to reverse climate change is a “fool’s errand” that would wreck the economy.

“I don’t think we can do anything about controlling what the climate is,” he said.


As Clubhouse’s popularity skyrockets, some observers are raising questions about the spread of misinformation

Clubhouse launched as an invite-only app last March.

  • As Clubhouse downloads doubled in February, experts asked how it planned to moderate content.
  • Users have hosted rooms questioning coronavirus vaccines, the Holocaust, and other topics.
  • A year after its launch, the invite-only app has more than 10 million weekly users.
  • Visit the Business section of Insider for more stories.

In a recent Clubhouse discussion about COVID-19 vaccines, a woman digitally raised her hand, entered the conversation, and spoke at length about how the virus could be treated more effectively with herbal and natural remedies than with vaccines.

She told dozens of listeners: “A pharmaceutical company is an industry, a business, just like anything else and everyone else, who is devoted specifically and exclusively to making sure their shareholders have profits, quarter over quarter. It is not about your health. It is not about your wellness.”

Clubhouse, which launched as an invite-only app last March, has in recent weeks surged in popularity to become one of the world’s most-downloaded iPhone apps. As of March 1, it had been downloaded about 11.4 million times, according to App Annie, a mobile data tracker. That was up from just 3.5 million a month earlier.

The company said in late February that it had more than 10 million active users each week.

As its growth skyrocketed this year, some technologists and academics began asking questions about how it moderates conversations. Outsiders were wondering about bots and the spread of misinformation – the same types of questions that have long been asked about Facebook, Twitter, and other social networks.

While vaccine discussions on Clubhouse may simply go against company guidelines – along with those from the Centers for Disease Control and Prevention – other conversations were more incendiary.

In one high-profile instance, a Twitter user shared a screenshot of a Clubhouse room called: “Were 6 million Jews really killed?” After users reported that room, the company said on Twitter: “This has no place on Clubhouse. Actions have been taken. We unequivocally condemn Anti-Semitism and all other forms of racism and hate speech.”

But some observers questioned whether less inflammatory misinformation had slipped through the cracks.

“Thus far, the creators of the app have been less concerned with misinformation, and more so with the growing number of users on the platform,” said Heinrich Long, a privacy expert at Restore Privacy.

By design, Clubhouse encourages users to explore, and jump in and out of discussions. At any given moment, there are hundreds or thousands of conversations in many different languages, making moderation a daunting task.

The company’s been building a Trust & Safety team for the last year, growing its numbers alongside the platform. As of Saturday, it had two public job postings for that team on its website.

Clubhouse declined an interview request for this story, but a spokesperson sent a statement saying “racism, hate speech and abuse are prohibited on Clubhouse.” Such speech would violate the company’s guidelines and terms.

“The spreading or sharing of misinformation is strictly prohibited on Clubhouse. Clubhouse strongly encourages people to report any violations of our Terms of Service or Community Guidelines,” the spokesperson said via email.

They added: “If it is determined that a violation has taken place, Clubhouse may warn, suspend, or remove the user from the platform, based on the severity of the violation.”

Everything said on Clubhouse is recorded in the moment, according to the app’s guidelines. While a discussion is live, the company keeps an encrypted recording; once the conversation ends, the recording is destroyed. The only time a recording is kept longer is when a listener flags the conversation to the company.

That moderation model is similar to the one used by Reddit, which largely relies on crowdsourced moderation, said Paul Bischoff, a privacy advocate at Comparitech. Unlike text-based Reddit, however, there won’t be a permanent record of every audio interaction on Clubhouse.

“That could lead to insulated echo chambers where misinformation is amplified without any outside viewpoints,” Bischoff said. “The live-ness could prevent people from being able to report bad behavior on the app, but it could also stem the spread of misinformation beyond the app.”

In the conversation about vaccines, for example, one user asked the woman touting herbal COVID-19 remedies if she could share her information, so listeners could reach out offline to learn more about why vaccines weren’t the best solution for the coronavirus.

There’s also a question of how bots or large groups of coordinated users could affect conversations on the app, said Sam Crowther, founder and chief executive at Kasada, a company that identifies bot activity.

Crowther said he’s already seen some chatter on bot-related message boards about how Clubhouse could be exploited.

“One of the underlying truths with internet businesses is that if you build it, they’ll make a bot to exploit it,” Crowther said, adding, “Removing fake accounts after they’re live is too late – companies need to take control and seize bad bots at registration.”

The app encourages users to explore, and jump in and out of discussions.

So how can Clubhouse effectively moderate thousands of conversations between millions of users, many of whom are speaking local languages?

Like Facebook and other social networks, Clubhouse would do best with some form of artificial intelligence or voice pattern recognition system, said Stephen Hunnewell, executive director at the ADALA Project, a nonprofit that advocates for free speech around the world.

But, Hunnewell said, the real danger of audio conversations is that the content can’t be unheard.

Take the conversation about curing COVID-19 with herbal remedies. Dozens of people listening to that conversation already digested the information. Even if the conversation was flagged in real time, Clubhouse couldn’t guarantee that false information wasn’t spread further by those who had already heard it.

“The real danger is in the cross-pollination that seed has planted within whatever audience heard it and their further amplification,” Hunnewell said.

With a new platform like Clubhouse, which has scaled to millions of users in a short space of time, every new user counts, said Nir Kshetri, a professor at the University of North Carolina-Greensboro. That’s why a young company like Clubhouse could choose to prioritize growth at all costs.

Kshetri compared Clubhouse to bigger competitors, like Microsoft, which runs LinkedIn. That company’s been around for decades, and employs some 3,500 experts focused on cybercrime, artificial intelligence, and machine learning, he said.

For a small company like Clubhouse, it may take years to build similarly robust misinformation-tracking systems. In the end, it’s more a decision for the management, he added.

“The question of whether social network sites should play the role of gatekeeper for the news and information their users consume is more philosophical than technological,” Kshetri said.

Even now, some users are fighting back against what they see as misinformation on Clubhouse. In the chat about vaccines, where a woman spoke in favor of herbal remedies for COVID-19, a doctor was responding in real time to claims made in the room. A few times during the hourslong conversation, he popped in to express his opinions.

“I agree with some of what you’re saying, but I don’t agree with all of it,” he said, before finally exiting the room.


Twitter’s Birdwatch feature hopes to combat misinformation. Here’s how it works and how to sign up for it

Twitter’s Birdwatch site allows approved users to fact-check tweets, and any user can view those fact-checks.

  • Twitter’s Birdwatch feature lets users flag tweets they believe are misleading or need additional context.
  • The Birdwatch feature is in pilot mode and is only available to select users in the US.
  • Birdwatch is Twitter’s latest method of cracking down on the rapid spread of misinformation on the platform.
  • Visit Insider’s Tech Reference library for more stories.

Twitter has introduced a series of features aimed to slow the spread of misinformation and disinformation on the platform. 

In January 2021, the social media company launched its pilot Birdwatch feature, which lets users flag and fact-check tweets they believe include inaccurate or deceptive information, as well as add notes to tweets that need additional context. 

The feature is only available to a select number of users within the US, but Twitter said it hopes to allow more people to participate soon. 



Trump’s fake inauguration on March 4 was QAnon’s latest vision that flopped. A new date is now being peddled to perpetuate the mind games.

Crowds gather outside the U.S. Capitol for the “Stop the Steal” rally on January 06, 2021 in Washington, DC.

  • QAnon followers believed that March 4 would see former President Donald Trump reinstated as president.
  • It was just the latest date in a long line of bizarre goalposts that continually shift.
  • Followers of the conspiracy theory are looking forward to future dates that will herald epic changes.
  • Visit the Business section of Insider for more stories.

“Don’t be disappointed,” wrote one subscriber on a popular QAnon Telegram channel late Thursday night. “The race is not run yet and I have reason to believe March 20 is also possible.”

Another believer posted a similarly optimistic message. “We still have 16 days,” they wrote. “Lots can happen between now and then!”

With the uneventful passage of March 4, a highly anticipated date for the conspiracy group and yet another supposedly momentous day that came and went, followers remained characteristically delusional, and QAnon fans spent Friday morning urging one another to look forward and “keep the faith.”

QAnon’s March 4 failure

When “the Storm” – the promise of mass arrests and executions on Joe Biden’s Inauguration Day – amounted to nothing, followers of the QAnon conspiracy theory scrambled for a new date to imagine Trump’s fictional swearing-in ceremony.

March 4, like several fruitless dates that preceded it, was born out of a convoluted political fantasy.

QAnon adherents borrowed from the obscure US-based sovereign-citizen movement to suggest that Trump would return to power on March 4, 2021. Sovereign citizens “believe that they get to decide which laws to obey and which to ignore,” according to the Southern Poverty Law Center, a nonprofit organization that tracks extremism.

The conspiracy-theory movement will continue to invent new dates to look forward to, or else their years of obsessional beliefs will all have been for naught, say far-right experts.

“Reality doesn’t really matter,” Nick Backovic, a contributing editor at fact-checking website Logically, where he researches misinformation and disinformation, told Insider. “Whether QAnon can survive another great disappointment, there’s no question – it can.”

The March 4 theory is rooted in a bizarre belief that argues all laws after the 14th Amendment, ratified in 1868, are illegitimate. 

The 20th Amendment, which moved Inauguration Day from March 4 to January 20, is viewed by sovereign citizens as invalid. 

Proponents of this conspiracy theory therefore insisted that, on the day when presidents were once sworn in, Trump would restore a republic that has supposedly been out of action for more than 150 years.

Travis View, a conspiracy theory expert and host of the QAnon Anonymous podcast, previously told Insider that it’s based on a “blind faith” that Trump can “fix everything.”

A series of no-shows

Before March 4, the QAnon follower’s calendar was marked with a string of dates that were once hailed as moments of reckoning that didn’t happen.

In 2017, the first “Q drop” – the cryptic messages from the anonymous “Q” figure whose guidance runs the movement – claimed that former Secretary of State Hillary Clinton would be arrested because of an unfounded allegation that she was involved in child sex trafficking. This, of course, never happened, but the QAnon conspiracy theory was born.

Devotees of the conspiracy theory then eagerly anticipated the Mueller report’s release in 2019, expecting its findings to lead to the arrest and possible execution of leading Democrats. Once again, this resulted in nothing more than disappointment for QAnon believers.

Then, in a bid to reconcile their belief that Trump would remain president, they believed January 6, which went on to be a deadly insurrection at the US Capitol, was a precursor to “The Storm” – a violent event that would result in the execution of child-abusive elites.

The goalpost was then moved to January 20, based on the claim that Trump would seize power prior to Biden taking his oath.

Outgoing U.S. President Donald Trump and First Lady Melania Trump exit Air Force One at the Palm Beach International Airport on the way to Mar-a-Lago Club on January 20, 2021, in West Palm Beach, Florida. Trump left Washington, DC on the last day of his administration before Joe Biden was sworn in as the 46th president of the United States.

But Trump was not inaugurated again on January 20 and instead left Washington to move down to his Florida home. In the hours after Biden’s inauguration, some QAnon believers were left confused and crestfallen.

Mental gymnastics ensued, with some QAnon influencers arguing that Biden’s inauguration had happened in a Hollywood studio and was therefore invalid; others claimed that Trump sent signals during his final pre-inauguration address indicating that he’d remain in office. These influencers again promoted to their followers the idea that somehow, their theory was not yet over.

“QAnon is dealing with a very difficult cognitive dissonance situation,” Michael Barkun, professor emeritus of political science at Syracuse University, told Insider.

Naturally, some believers become fed up with failures

Several top QAnon voices disavowed the March 4 conspiracy theory in the days leading up to Thursday. These influencers have likely been attempting to keep their followers on board with the conspiracy theory despite its myriad disappointments, Backovic told Insider.

A Wednesday post on a QAnon Telegram channel with nearly 200,000 subscribers called the plan “BS,” even though the same page had previously told its followers that the “new Republic” would begin on March 4.

Another top conspiracy theorist told their 71,000 subscribers on Wednesday morning that a “Q drop” contained a hint that the March 4 conspiracy theory was a false flag. “March 4 is a Trap,” the post said.

QAnon supporters in a Telegram channel express confusion after Biden’s inauguration.

Whenever QAnon’s prophecies are proven wrong, the movement does lose some support, Backovic said.

In the days after President Biden’s inauguration, many QAnon believers did express a desire to leave the movement, fed up with the lies they’d been told. Even Ron Watkins, once QAnon’s top source for voter-fraud misinformation, told his 134,000 Telegram subscribers on the afternoon of January 20: “Now we need to keep our chins up and go back to our lives as best we are able.”

QAnon influencers calling the March 4 conspiracy a “false flag” also helps place blame on others in case things go awry, as they did on January 6. Finding a scapegoat is a common tactic for extremists, according to Backovic.

After the Capitol insurrection, QAnon supporters and other pro-Trump protesters – and several Republicans in Congress – spread the false claim that antifa, the anti-fascist movement, staged the deadly coup attempt on the Capitol.

FBI Director Christopher Wray, testifying before Congress on Tuesday, said there was no evidence antifa was involved in the riot.

In addition to focusing on specific dates, QAnon has evolved and adapted to include other conspiracy theories and enter more conventional spaces. 

Last spring, the movement pivoted to focus on ending human trafficking, making “Save the Children” its new battle cry. QAnon leveraged mainstream social media, including Instagram, where lifestyle influencers spread it.

Then, last fall, QAnon extremists joined with other right-wing groups to protest Biden’s election win as part of the Stop the Steal movement, which culminated in the Capitol insurrection.

National Guard keep watch on the Capitol, Thursday, March 4, 2021, on Capitol Hill in Washington.

With nothing happening on March 4, believers look forward (again)

The latest disappointment has already resulted in new dates being introduced with increasingly desperate explanations.

Some QAnon influencers have suggested that March 20 is when Trump will seize control, misinterpreting the Presidential Transition Enhancement Act of 2019, which streamlines the presidential transition by providing certain services to the outgoing administration for 60 days after the inauguration.

The claim, first made on a popular QAnon Telegram channel, appeared to be gaining ground with supporters offline, too. A QAnon supporter interviewed by The Washington Post’s Dave Weigel said he believes Trump remains in command of the military and will be inaugurated on the 20th.

But core followers of the conspiracy theory are reluctant to throw all their weight behind a particular date.

In another Telegram message board for QAnon believers, one post encouraged people to remain open-minded about Q’s plan. “Dates for late March, April, May, and more dates in the fall have been tossed out there,” the post said. “While we can speculate and hope, no specific dates have been landed on… don’t get caught up in the dates, watch what’s happening.”

Some followers, tempered by repeated disappointment, are simply set on a resounding victory for Trump in 2024.

“Whether it’s some date in March or whether ultimately it will be a second Trump term after an election in 2024,” Barkun told Insider, “there will be some further set of explanations and a further set of dates.”

And the cycle continues.
