Republican Rep. Devin Nunes was briefly locked out of his Twitter account on Tuesday evening after he failed to get past the company’s anti-spam filters, the company said.
“Our automated systems took enforcement action on the account in error and it has since been reversed. The enforcement action was taken as a result of the account’s failure to complete an anti-spam challenge that we regularly deploy across the service,” a Twitter spokesperson told Insider.
Twitter, like other websites, uses reCAPTCHAs – puzzles that require users to click on certain images to prove they’re humans. According to Twitter’s statement, Nunes was unable to successfully complete a reCAPTCHA, prompting Twitter’s systems to block access to his account.
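Conceptually, the server-side half of a reCAPTCHA check is simple: the site forwards the user's challenge token to Google's "siteverify" endpoint and inspects the JSON verdict it gets back. The sketch below is a generic illustration of that decision step, not Twitter's actual code; the sample payload and the lockout message are invented for the example.

```python
import json

# A site posts the user's challenge token to Google's siteverify endpoint
# and receives a JSON payload like the (invented) sample below. Access is
# restored only if "success" is true; otherwise automated enforcement kicks in.
SAMPLE_RESPONSE = '{"success": false, "error-codes": ["timeout-or-duplicate"]}'

def passed_challenge(siteverify_json: str) -> bool:
    """Return True only if the siteverify payload reports success."""
    payload = json.loads(siteverify_json)
    return payload.get("success", False)

if __name__ == "__main__":
    if not passed_challenge(SAMPLE_RESPONSE):
        # Illustrative consequence, loosely mirroring what happened to Nunes.
        print("challenge failed: account access temporarily restricted")
```

In practice the token is sent with a site-specific secret over HTTPS; the point here is only that a failed or expired challenge yields `"success": false`, which automated systems then act on.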
It is unclear whether it was Nunes or a staffer who failed the reCAPTCHA. A spokesperson for Nunes did not immediately respond to a request for comment.
Twitter users were quick to mock Nunes over the lockout, given his antagonistic history with the social media platform.
“this can’t be real,” tweeted the account @DevinCow, while the account @DevinAlt mocked Nunes’ inability to solve the reCAPTCHA puzzle.
In March 2019, Nunes sued Twitter for $250 million over tweets posted by the two anonymous parody accounts, as well as a real account for Republican strategist Liz Mair. Nunes had argued that Twitter was liable for the tweets, which he said ruined his reputation and contributed to him winning a 2018 election by a “much narrower margin” than in previous years.
In June 2020, the courts tossed out his case, ruling that the social media network cannot be held liable for unflattering tweets made by its users.
YouTube said Tuesday that it has “removed new content” from President Donald Trump’s official channel and banned him from posting new videos for a “minimum” of one week for violating its policies.
YouTube also gave Trump’s channel its first “strike,” and is “indefinitely disabling” comments over “safety concerns.”
YouTube’s actions come days after Facebook and Twitter banned Trump from their platforms entirely, and amid pushback from Google’s newly formed union, which slammed the company’s response to recent violence as “lackluster.”
YouTube has suspended President Donald Trump’s account for at least one week after removing a video that the company said incited violence.
The offending video was uploaded Tuesday and violated YouTube’s policies on inciting violence, a spokesperson said, although the company did not share details of the video’s contents.
YouTube said it had issued the account a single strike, preventing it from uploading new videos for seven days, but said that timeframe could be extended.
The company said it has also disabled comments under videos on the channel indefinitely.
“After careful review, and in light of concerns about the ongoing potential for violence, we removed new content uploaded to the Donald J. Trump channel and issued a strike for violating our policies for inciting violence,” a spokesperson told Business Insider.
“As a result, in accordance with our long-standing strikes system, the channel is now prevented from uploading new videos or livestreams for a minimum of seven days, which may be extended. We are also indefinitely disabling comments under videos on the channel. We’ve taken similar actions in the past for other cases involving safety concerns.”
Although Trump’s account is suspended, the channel is still active along with previously uploaded videos, some of which falsely claim that President-elect Joe Biden did not win the election.
A spokesperson said that a second strike on the channel will lead to a two-week ban, and three strikes means permanent suspension.
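The escalation the spokesperson described amounts to a small mapping from strike count to penalty. A minimal sketch, using only the durations quoted in this article (the function name and return strings are illustrative, not YouTube's implementation):

```python
# Illustrative sketch of the escalating strike policy described above.
# Durations are those quoted in the article; this is not YouTube's code.
def penalty_for(strikes: int) -> str:
    if strikes <= 0:
        return "no restriction"
    if strikes == 1:
        return "7-day upload ban (may be extended)"
    if strikes == 2:
        return "14-day upload ban"
    return "permanent suspension"
```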
YouTube is the last major internet platform to suspend Trump’s account after pro-Trump insurrectionists stormed the US Capitol last week. Facebook suspended Trump’s account for at least two weeks, while Twitter banned him indefinitely.
While YouTube removed a video message posted by Trump last week, in which he spoke to the rioters, it stopped short of suspending his account entirely. Instead, the Google-owned service introduced a new strike policy.
The decision has drawn criticism from within and outside of Google. The recently formed Alphabet Workers Union slammed YouTube for its “lackluster” response to the siege on the Capitol, while civil rights groups and celebrity figures including Sacha Baron Cohen publicly called for the account to be suspended.
Google was swifter to pull Parler, the social media app that’s popular with Trump supporters, from its Play Store. Google said the app did not have sufficient moderation policies in place to curb content that could also incite violence.
Are you a Google employee with more to share? You can contact the reporter Hugh Langley securely using the encrypted messaging app Signal (+1-628-228-1836) or encrypted email (email@example.com). Reach out using a nonwork device.
Facebook’s safety team determined earlier this year that Bajrang Dal, a religious extremist group in India, was likely a “dangerous organization” that should be banned from the platform under its rules, The Wall Street Journal reported Sunday.
But, The Journal reported, Facebook became concerned about banning the group after its security team warned that doing so could lead to attacks against Facebook’s staff.
Facebook’s inconsistency in enforcing its rules in India has also been motivated by fears that backlash from India’s nationalist ruling party could hurt business, The Wall Street Journal previously reported.
The social media company has increasingly come under fire over its struggle to effectively and consistently police its platform — especially outside of the US, where users have leveraged its platform to facilitate ethnic violence, undermine democratic processes, and crack down on free speech.
Facebook determined that a religious extremist group in India likely should be banned from the platform for promoting violence, but it has yet to take action because of concerns over its staff’s safety and political repercussions that could hurt its business, The Wall Street Journal reported Sunday.
Bajrang Dal, a militant Hindu nationalist group, has physically assaulted Muslims and Christians, and one of its leaders recently threatened violence against Hindus who attend church on Christmas.
Earlier this year, Facebook’s safety team determined that Bajrang Dal likely was a “dangerous organization” and, per its policies against such groups, should be removed from the platform entirely, according to The Journal.
But Facebook hesitated to enforce those rules after its security team concluded that doing so could hurt its business in India as well as potentially trigger physical attacks against its employees or facilities, The Journal reported.
“We ban individuals or entities after following a careful, rigorous, and multi-disciplinary process. We enforce our Dangerous Organizations and Individuals policy globally without regard to political position or party affiliation,” a Facebook company spokesperson told Business Insider.
According to The Journal, Facebook declined to say whether it ultimately designated Bajrang Dal as a dangerous organization.
This isn’t the first time Facebook has faced criticism over how it has – or hasn’t – enforced its rules, even within India in the past few months.
The Journal reported in August that Facebook refused to apply its hate speech policies to T. Raja Singh, a politician from India’s nationalist ruling BJP party, despite his calls to shoot Muslim immigrants and threats to destroy mosques.
Facebook employees had concluded that, in addition to violating the company’s policies, Singh’s rhetoric in the real world was dangerous enough to merit kicking him off the platform entirely. However, Facebook’s top public policy executive in India overruled them, arguing that the political repercussions could hurt the company’s business (India is its largest and fastest-growing market globally by number of users).
The internal tension over Bajrang Dal reflects the frequent challenges Facebook faces when its profits come into conflict with local governments and laws, rules the company has established for its platform, and CEO Mark Zuckerberg’s pledges to uphold free speech and democratic processes.
In August, Facebook took the rare step of legal action against Thailand’s government over its demand that the company block users within the country from accessing a group critical of its king, though it’s complying with the government’s request while the case proceeds in court.
But BuzzFeed News reported in August that Facebook ignored or failed to quickly address dozens of incidents of political misinformation and efforts to undermine democracy around the world, particularly in smaller and non-Western countries.
And even as Zuckerberg has defended Facebook’s exemption of President Donald Trump and other politicians from its hate speech and fact-checking policies, human rights activists around the world have slammed the social media giant for refusing to protect the free speech of those not in power.