Facebook moderators, tasked with watching horrific content, are demanding an end to NDAs that promote a ‘culture of fear and excessive secrecy’

Facebook CEO Mark Zuckerberg in Washington DC on Oct. 23, 2019.

  • Facebook content moderators are calling for an end to NDAs, which prohibit them from talking about work.
  • Moderators are contractors tasked with sifting through violent content, like suicide and child abuse.
  • The moderators are also asking for better mental health support and full-time employee pay.
  • See more stories on Insider’s business page.

Content moderators for Facebook are urging the company to improve benefits and update non-disclosure agreements that they say promote “a culture of fear and excessive secrecy.”

In a letter addressed to Facebook CEO Mark Zuckerberg and COO Sheryl Sandberg – as well as executives of contracting firms Covalen and Accenture – a group of moderators said “content moderation is at the core of Facebook’s business model. It is crucial to the health and safety of the public square. And yet the company treats us unfairly and our work is unsafe.”

Their demands are threefold:

  • Facebook must change the NDAs that prohibit them from speaking out about working conditions.
  • The company must provide improved mental health support, with better access to clinical psychiatrists and psychologists. As the letter reads: “it is not that the content can ‘sometimes be hard’, as Facebook describes, the content is psychologically harmful. Imagine watching hours of violent content or children abuse online as part of your day-to-day work. You cannot be left unscathed.”
  • Facebook must make all content moderators full-time employees and provide them with the pay and benefits that in-house workers are afforded.

Facebook did not immediately respond to Insider’s request for comment. A company spokesperson told The Verge that moderators do have access to mental health care “when working with challenging content,” and moderators in Ireland specifically have “24/7 on-site support.”

Covalen and Accenture did not immediately respond to requests for comment.

Friday’s letter comes amid long-running complaints from Facebook’s content moderators about the company’s treatment of them, even as they’re tasked with sifting through horrific content on its platforms. That content can include violent physical and sexual abuse, suicide, and other graphic visuals.

A moderator employed through Covalen, a Facebook contractor in Ireland, told the Irish Parliament in May that they’re offered “wellness coaches” to cope, but it’s not enough.

“These people mean well, but they’re not doctors,” the moderator, 26-year-old Isabella Plunkett, said in May. “They suggest karaoke or painting but you don’t always feel like singing, frankly, after you’ve seen someone battered to bits.”

Facebook CEO Mark Zuckerberg said in a company-wide meeting in 2019 that some of the content moderators’ stories around coping with the work were “a little dramatic.”

Read the original article on Business Insider

Facebook employees are reportedly begging the company to do more about racist comments attacking English soccer players after the team lost the Euro Cup final

Bukayo Saka on July 11, 2021.

  • Employees are frustrated with Facebook as racist comments fill Black soccer players’ accounts.
  • Employees said the company needs to act faster and “can’t be seen as complicit in this.”
  • One employee said they’ve reported so many comments that their account has been blocked from submitting further reports.

Facebook employees are begging the company to take more aggressive action against racist comments on the accounts of English soccer players Bukayo Saka and Marcus Rashford, according to journalist Ryan Mac.

Mac said on Twitter that Facebook employees told him they have been alerting the company to racist comments for more than 12 hours. Many of the comments include monkey emojis, according to Mac, and an employee asked whether it’s “possible to remove known racist emojis from comments.”

The company told Insider that a banana or monkey emoji on its own doesn’t necessarily violate the rules, but the context is considered when Facebook reviews content.

The comments also appear to be coming from anonymous and spam accounts that intend to abuse people online, an employee said.

Another employee said they had reported so many racist comments that their personal Instagram account had been blocked from submitting further reports.

“We MUST act faster here,” one employee said, according to Mac.

A Facebook spokesperson told Insider that “we quickly removed comments and accounts directing abuse at England’s footballers last night and we’ll continue to take action against those that break our rules.”

The spokesperson also said, “no one thing will fix this challenge overnight, but we’re committed to keeping our community safe from abuse.”

Other employees said they are confused as to why Facebook didn’t anticipate such racist remarks leading up to the sporting event since they have been a common occurrence throughout the season, according to Mac.

“We get this stream of utter bile every match, and it’s even worse when someone black misses… We really can’t be seen as complicit in this,” one employee said in an internal forum, Mac said.

Insider reviewed the comments sections under both Saka’s and Rashford’s Instagram posts and found numerous racist comments, such as one from a user who referred to Saka and another player as “baboooooons.”

Many users called for the racist hate to stop, expressed support for both players, and advised others to report racist comments they saw.

Facebook uses both human moderators and AI to sift through posts. Content moderation has long been Facebook’s Achilles’ heel, and public pressure for the company to more heavily police what users post has picked up steam in the last year specifically.

Facebook’s content moderation policies have repeatedly taken center stage in political discourse, especially in regard to former President Donald Trump’s posts, the 2020 presidential election, and the right’s belief that Big Tech censors conservative users.


Twitter accidentally locked GOP Rep. Devin Nunes out of his account because he failed its anti-spam filter

Rep. Devin Nunes, R-Calif., the ranking member of the House Intelligence Committee, pauses at the end of testimony by witnesses Jennifer Williams, an aide to Vice President Mike Pence, and National Security Council aide Lt. Col. Alexander Vindman, on Capitol Hill in Washington, Tuesday, Nov. 19, 2019.

Republican Rep. Devin Nunes was briefly locked out of his Twitter account on Tuesday evening after he failed to get past the company’s anti-spam filters, the company said.

“Our automated systems took enforcement action on the account in error and it has since been reversed. The enforcement action was taken as a result of the account’s failure to complete an anti-spam challenge that we regularly deploy across the service,” a Twitter spokesperson told Insider.

Twitter, like other websites, uses reCAPTCHAs – puzzles that require users to click on certain images to prove they’re human. According to Twitter’s statement, Nunes was unable to successfully complete a reCAPTCHA, prompting Twitter’s systems to block access to his account.

It is unclear whether Nunes or a staffer was responsible for the failed reCAPTCHA. A spokesperson for Nunes did not immediately respond to a request for comment.

Twitter users were quick to mock Nunes over the lockout, given his antagonistic history with the social media platform.

“this can’t be real,” tweeted the account @DevinCow, while the account @DevinAlt mocked Nunes’ inability to solve the reCAPTCHA puzzle.


In March 2019, Nunes sued Twitter for $250 million over tweets posted by the two anonymous parody accounts, as well as a real account for Republican strategist Liz Mair. Nunes had argued that Twitter was liable for the tweets, which he said ruined his reputation and contributed to him winning a 2018 election by a “much narrower margin” than in previous years.

In June 2020, the courts tossed out his case, ruling that the social media network cannot be held liable for unflattering tweets made by its users.


YouTube removes Trump content and bans him from uploading new videos for a minimum of 7 days, citing ‘ongoing potential for violence’

  • YouTube said Tuesday that it has “removed new content” from President Donald Trump’s official channel and banned him from posting new videos for a “minimum” of one week for violating its policies.
  • YouTube also gave Trump’s channel its first “strike,” and is “indefinitely disabling” comments over “safety concerns.”
  • YouTube’s actions come days after Facebook and Twitter banned Trump from their platforms entirely, and amid pushback from Google’s newly formed union, which slammed the company’s response to recent violence as “lackluster.”
  • Visit Business Insider’s homepage for more stories.

YouTube has suspended President Donald Trump’s account for at least one week after removing a video that the company said incited violence.

The offending video was uploaded Tuesday and violated YouTube’s policies on inciting violence, a spokesperson said, although the company did not share details of the video’s contents.

YouTube said it had issued the account a single strike, preventing it from uploading new videos for seven days, but said that timeframe could be extended.

The company said it has also disabled comments under videos on the channel indefinitely.

“After careful review, and in light of concerns about the ongoing potential for violence, we removed new content uploaded to the Donald J. Trump channel and issued a strike for violating our policies for inciting violence,” a spokesperson told Business Insider.

“As a result, in accordance with our long-standing strikes system, the channel is now prevented from uploading new videos or livestreams for a minimum of seven days – which may be extended. We are also indefinitely disabling comments under videos on the channel. We’ve taken similar actions in the past for other cases involving safety concerns.”

Although Trump’s account is suspended, the channel is still active along with previously uploaded videos, some of which falsely claim that President-elect Joe Biden did not win the election.

A spokesperson said that a second strike on the channel will lead to a two-week ban, and three strikes means permanent suspension.

YouTube is the last major internet platform to suspend Trump’s account after pro-Trump insurrectionists stormed the US Capitol last week. Facebook suspended Trump’s account for at least two weeks, while Twitter banned him indefinitely.

While YouTube removed a video message posted by Trump last week, in which he spoke to the rioters, it stopped short of suspending his account entirely. Instead, the Google-owned service introduced a new strike policy.

The decision has drawn criticism from within and outside of Google. The recently-formed Alphabet Workers Union slammed YouTube for its “lackluster” response to the siege on the Capitol, while civil rights groups and celebrity figures including Sacha Baron Cohen publicly called for the YouTube account to be suspended.

Google was swifter to pull Parler, the social media app that’s popular with Trump supporters, from its Play Store. Google said the app did not have sufficient moderation policies in place to curb content that could also incite violence.

Are you a Google employee with more to share? You can contact the reporter Hugh Langley securely using the encrypted messaging app Signal (+1-628-228-1836) or encrypted email (hslangley@protonmail.com). Reach out using a nonwork device.


Facebook reportedly hasn’t banned a violent religious extremist group in India because it fears for its business prospects and staff’s safety

Indian Prime Minister Narendra Modi (L) and Facebook CEO Mark Zuckerberg at Facebook’s headquarters in Menlo Park, California September 27, 2015.

  • Facebook’s safety team determined earlier this year that Bajrang Dal, a religious extremist group in India, was likely a “dangerous organization” that should be banned from the platform under its rules, The Wall Street Journal reported Sunday.
  • But, The Journal reported, Facebook became concerned about banning the group after its security team warned that doing so could lead to attacks against Facebook’s staff.
  • Facebook’s inconsistency in enforcing its rules in India has also been motivated by fears that backlash from India’s nationalist ruling party could hurt business, The Wall Street Journal previously reported.
  • The social media company has increasingly come under fire over its struggle to effectively and consistently police its platform — especially outside of the US, where users have leveraged its platform to facilitate ethnic violence, undermine democratic processes, and crack down on free speech.

Facebook determined that a religious extremist group in India likely should be banned from the platform for promoting violence, but it has yet to take action because of concerns over its staff’s safety and political repercussions that could hurt its business, The Wall Street Journal reported Sunday.

Bajrang Dal, a militant Hindu nationalist group, has physically assaulted Muslims and Christians, and one of its leaders recently threatened violence against Hindus who attend church on Christmas.

Earlier this year, Facebook’s safety team determined that Bajrang Dal likely was a “dangerous organization” and, per its policies against such groups, should be removed from the platform entirely, according to The Journal.

But Facebook hesitated to enforce those rules after its security team concluded that doing so could hurt its business in India as well as potentially trigger physical attacks against its employees or facilities, The Journal reported.

“We ban individuals or entities after following a careful, rigorous, and multi-disciplinary process. We enforce our Dangerous Organizations and Individuals policy globally without regard to political position or party affiliation,” a Facebook company spokesperson told Business Insider.

Facebook declined to tell The Journal whether it ultimately concluded that Bajrang Dal was not a dangerous organization.

This isn’t the first time Facebook has faced criticism over how it has – or hasn’t – enforced its rules, even within India, and even within the past few months.

The Journal reported in August that Facebook refused to apply its hate speech policies to T. Raja Singh, a politician from India’s nationalist ruling BJP party, despite his calls to shoot Muslim immigrants and threats to destroy mosques.

Facebook employees had concluded that, in addition to violating the company’s policies, Singh’s rhetoric in the real world was dangerous enough to merit kicking him off the platform entirely. However, Facebook’s top public policy executive in India overruled them, arguing that the political repercussions could hurt the company’s business (India is its largest and fastest-growing market globally by number of users).

The internal tension over Bajrang Dal reflects the frequent challenges Facebook faces when its profits come into conflict with local governments and laws, rules the company has established for its platform, and CEO Mark Zuckerberg’s pledges to uphold free speech and democratic processes.

In August, Facebook took the rare step of legal action against Thailand’s government over its demand that the company block users within the country from accessing a group critical of its king, though it’s complying with the government’s request while the case proceeds in court.

But BuzzFeed News reported in August that Facebook ignored or failed to quickly address dozens of incidents of political misinformation and efforts to undermine democracy around the world, particularly in smaller and non-Western countries.

And even as Zuckerberg has defended Facebook’s exemption of President Donald Trump and other politicians from its hate speech and fact-checking policies, human rights activists around the world have slammed the social media giant for refusing to protect the free speech of those not in power.
