TikTok’s algorithm shows anti-vaccine videos to children as young as 9, researchers say

FILE PHOTO: 3D-printed figures are seen in front of a displayed TikTok logo in this picture illustration taken November 7, 2019. REUTERS/Dado Ruvic/Illustration/File Photo
  • Kids are vulnerable to COVID misinformation on TikTok within minutes of signing up, a study shows.
  • Though TikTok prohibits users under 13, younger children can easily lie about their age to sign up.
  • Social media companies continue to face public criticism about their effects on young users.

At this point, it’s no secret that social media algorithms unintentionally help peddle COVID misinformation to millions of users. The more pressing problem is who that content is directed toward.

The popular social media app TikTok is feeding misinformation to young children, sometimes within minutes of their signing up. False information reached children as young as nine, even when those young users did not follow or search for such content.

According to a report from the media rating firm NewsGuard, COVID-related misinformation reached eight of the study’s nine child participants within their first 35 minutes on the platform, and two-thirds of the participants saw false information specific to the COVID vaccines. That content included unsubstantiated claims about COVID and the vaccines, as well as homeopathic remedies for the virus.

“TikTok’s failure to stop the spread of dangerous health misinformation on their app is unsustainable bordering on dangerous,” Alex Cadier, the UK managing editor for NewsGuard who co-authored the report, told the Guardian. “Despite claims of taking action against misinformation, the app still allows anti-vaccine content and health hoaxes to spread relatively unimpeded.”

NewsGuard conducted the study in August and September, asking children ages nine to 17 from different cultural backgrounds to create accounts on TikTok. Though the platform restricts full access to the app for users younger than 13, the three youngest users were able to create accounts with no outside help. As of March 2021, a quarter of TikTok’s 130 million active monthly users in the US are between 10 and 19, according to Statista.

“TikTok is very bad at removing videos with misinformation, and these videos with vaccine misinformation stay for months and months on the platform,” University of Illinois School of Public Health epidemiologist Katrine Wallace, who battles misinformation on TikTok, told Insider. “The more viral these videos get, the more eyes will see them, and unfortunately some will be children, due to the nature of the algorithms.”

TikTok’s community guidelines prohibit “false or misleading” content relating to COVID-19 and its vaccines, and the company employs teams that work to identify and remove misinformation, evaluating all COVID-related content on a case-by-case basis.

The app also said that it pushes for an “age-appropriate experience,” discouraging and removing accounts created by underage users and restricting LIVE and Direct messaging features for younger teens. Douyin, the Chinese version of TikTok, announced in September it was capping the amount of time users under 14 could use the app to 40 minutes per day.

TikTok didn’t respond to a request for comment on the NewsGuard report.

Besides TikTok, other platforms like Facebook, Instagram, and Twitter have come under fire in recent months as increased transparency from the companies revealed more about social media’s effects on society, particularly on younger generations. This week, a Facebook whistleblower helped shed light on the ways its platforms psychologically harm teenage users. Meanwhile, high-profile influencers on social media continue to spread COVID misinformation, ramping up the amount of harmful content directed at younger viewers.

Google and YouTube say they will cut off climate-change deniers from being able to monetize their content and display ads

Google CEO Sundar Pichai.

  • Google plans to intervene in content that promotes lies about climate change.
  • YouTube previously banned all anti-vaccine content, despite historically avoiding content moderation.
  • The policy will affect advertisers, publishers, and YouTube creators.

Google is pulling the plug on climate deniers on its platform, banning content that contradicts well-established research from the scientific community, the company announced on Thursday.

The tech giant is taking a two-pronged approach: the policy applies to advertisers and publishing partners whose Google-served ads promote climate change misinformation on pages and videos, as well as to YouTube Partner Program creators who try to monetize videos containing climate change misinformation, according to a company blog post.

The new rule specifically targets claims that climate change is a “hoax or a scam,” claims that deny long-term environmental trends, and claims that ignore significant contributors to climate change, such as greenhouse gas emissions and humanity’s role. Google will continue to allow ads and monetization on climate-related topics, such as informed debates on climate change and verifiable research.

“We’ll look carefully at the context in which claims are made, differentiating between content that states a false claim as fact, versus content that reports on or discusses that claim,” the company said in the statement.

This follows a similar major move last week from Google-owned YouTube, which announced it would ban all anti-vaccination content on its site, expanding beyond content dealing with COVID-19; YouTube had already banned misinformation about COVID vaccines last October. Social media companies have generally taken a hands-off approach to content moderation, but they have since taken steps to rein in misinformation across their platforms.

Google, the largest digital-ad seller, has been criticized by Congress and climate change activists for allowing companies and climate-denying interest groups to buy search ads. Inaccurate, monetized climate change videos on YouTube received over 21 million views, according to 2020 research from the nonprofit organization Avaaz, Bloomberg first reported.

Google consulted with experts from the United Nations Intergovernmental Panel on Climate Change on the new monetization policy. The IPCC published its sixth assessment of the state of climate change in August, warning of “irreversible” climate-related changes.

The company will begin enforcing the new changes in November.

The policy change also comes amid several sustainability features Google released this week, including eco-friendly routes in Google Maps, aimed at reaching a “billion sustainable actions,” Google’s chief sustainability officer, Kate Brandt, said.

Whistleblower says Facebook enabled misinformation by eliminating safeguards too soon after the 2020 election, helping to fuel the Capitol riot, report says

Facebook campus
  • Facebook is mounting a defense against new allegations from a whistleblower, the New York Times reported.
  • The whistleblower says the company turned off crucial safety measures, like limits on live video, too soon after the 2020 US elections.
  • These actions caused misinformation to spread quickly and gave rioters easy ways to plan the insurrection, the whistleblower is expected to reveal.

A former employee is accusing Facebook of relaxing election-related safeguards too soon, a move the whistleblower says contributed to the spread of misinformation in the lead-up to the deadly Capitol riot.

The New York Times reported the whistleblower, whose identity is not yet known, is planning to reveal the accusation on Sunday.

Facebook, in the meantime, is in preparation mode, according to an internal memo from the social media company obtained by the Times.

Facebook declined to comment.

The whistleblower says the company turned off some crucial safety measures like limits on live video too quickly following the 2020 presidential election, the Times reported. Actions like that caused misinformation to spread quickly and gave supporters of former President Donald Trump easy ways to plan the insurrection and tout false claims about voter fraud.

Facebook, which in the memo called the accusations “misleading,” is mounting a defense.

“Social media has had a big impact on society in recent years, and Facebook is often a place where much of this debate plays out,” Facebook VP Nick Clegg wrote in the 1,500-word memo seen by the newspaper. “What evidence there is simply does not support the idea that Facebook, or social media more generally, is the primary cause of polarization.”

The whistleblower, who’s expected to reveal their identity on Sunday in a “60 Minutes” interview, has been feeding the Wall Street Journal staggering information recently, including documents that revealed Facebook knew its apps and services could lead to body image issues for young girls. Facebook did not do much to reverse the adverse effects, the leaker has alleged.

The whistleblower also provided documents on a litany of other issues, including evidence alleging the company deceives the public and its investors about its campaigns to curtail or end hate, violence, and misinformation. Facebook was also almost booted from Apple’s App Store because human traffickers were actively using the site, the WSJ reported. Facebook said the Wall Street Journal reporting was “riddled with flaws.”

Clegg’s memo about the latest whistleblower claims was distributed to Facebook employees on Friday ahead of the expected CBS interview, the Times reported.

“We will continue to face scrutiny – some of it fair and some of it unfair,” Clegg wrote in the memo. “But we should also continue to hold our heads up high.”

Facebook declined to provide a copy of the memo to Insider. But it’s available to read at the New York Times, which published it in full.

Activists and healthcare professionals are trying to stomp out anti-vaxx info online – but social media algorithms are working against them

  • Social media algorithms aren’t adequately moderating platforms for misinformation, an expert says.
  • COVID-19 misinformation communities are taking advantage of engagement-driven algorithms to spread their messages on social media.
  • Cracking down on bad content has become close to an “impossible task” for health advocates and social media moderators.

We might not immediately see it, but we are surrounded by COVID-19 lies on social media: anti-vaxxers sharing ill-informed opinions about vaccines, dark-net vendors trying to sell fake COVID vaccine cards, fringe doctors telling people to dose themselves with bleach or horse dewormer. If we don’t see it, that’s because something is actively working against our best health interests.

TikTok, Facebook, Instagram, and Twitter are just some platforms that have tried to crack down on COVID falsities – but the companies may be locked in a futile battle with their own algorithms.

These opaque mechanisms make misinformation harder for global health advocates to track. At the same time, they keep accurate, informative content away from its intended audiences, primarily those straddling the line between understanding COVID-19 and vaccine hesitancy.

Members of Team Halo, a UN-backed campaign that helps amplify the voices of credible health professionals on social media platforms, have struggled with tackling misinformation online.

“The algorithm-driven platforms are very good at separating audiences,” Dr. Katrine Wallace, an epidemiologist at the University of Illinois School of Public Health and a member of Team Halo, told Insider. “I don’t see [misinformation content] as much, but that’s good and bad… we know the demand is still there, it’s probably just not as public.”

Wallace, who publishes content correcting coronavirus misinformation videos on TikTok and Instagram, said health experts rely on secondhand reports from other users pointing out misinformation, because the algorithms keep them from seeing it themselves. Without those reports, misinformation goes unchecked, even by the social media companies themselves, and is left to metastasize.

Wallace and several other Team Halo health professionals, whose identities and medical credentials were verified by Insider, said their own reputable, data-supported content has been suspended and flagged for review by the platforms. Even once the content is cleared by platform moderators, it receives fewer page views and less exposure, Wallace said. So while the algorithms target reputable COVID-19 content, they leave some misinformation alone.

Fewer than 1 in 20 false posts were removed across Facebook, Twitter, Instagram, and YouTube, even after users had reported the content, according to a 2020 report from the Center for Countering Digital Hate (CCDH).

“Much debate about misinformation on social media is about automated algorithms and detection,” the report says. “Even when companies are handed misinformation on a silver platter, they fail to act.”

Health professionals warned companies like Google, Twitter, and Facebook that anti-vaxxers were “weaponizing” their platforms to spread bad information. One NYU study showed misinformation on Facebook got six times more engagement than factual information. Instagram recommendation algorithms also pushed anti-vaxx posts, per a CCDH report.

Some platforms’ proposed remedies have been criticized as “band-aid” solutions that review and remove users on a case-by-case basis, while others simply can’t keep up with the rapid spread of bad content.

“People like Team Halo are starting to realize countering misinformation in ‘hand-to-hand combat’ is something that you could spend an infinite amount of time doing and never succeed,” Imran Ahmed, the CEO of the CCDH and an expert on social media, said in an interview with Insider. “It’s an impossible task.”

Wallace says she has been working to address misinformation on the virus for 18 months.

“It turned out to be a lot more of an arduous task than I had anticipated,” she said.

Social media giants have been secretive about how their algorithms work, and politicians, scientists, and activists are calling on social media companies to take more action against misinformation.

Facebook, Instagram, Twitter, and TikTok did not respond to a request for comment for this article.

Insider has previously reached out to Facebook, Instagram, Twitter, and TikTok about their content moderation processes and how algorithms use keywords to spot possible COVID misinformation, but the companies did not elaborate on their specific detection processes. The companies do employ a mix of algorithms and human reviewers to moderate their platforms for potentially harmful content.

Facebook fires back at damning Wall Street Journal reports that accuse the company of being ‘riddled with flaws’

A person looks at a smartphone with a Facebook logo displayed in the background.
  • The Wall Street Journal published a series of reports finding Facebook to be “riddled with flaws.”
  • The series found the company turns a blind eye to its impact on everything from young girls using Instagram to human trafficking.
  • Facebook just issued a statement calling the series full of “deliberate mischaracterizations.”

Facebook fired back at the Wall Street Journal following the newspaper’s multi-part series outlining employee concerns about a litany of issues at the social media giant, from human trafficking conducted through the site to the company turning a blind eye to teenagers’ mental health.

“The Facebook Files,” published last week, found Facebook employees know the social media giant is “riddled with flaws.”

On Saturday, Facebook responded by slamming the series as full of “deliberate mischaracterizations” in a statement penned by Nick Clegg, the company’s vice president of global affairs.

“At the heart of this series is an allegation that is just plain false: that Facebook conducts research and then systematically and willfully ignores it if the findings are inconvenient for the company,” Clegg wrote.

The Journal reviewed internal company research reports, online employee discussions, and drafts of presentations made to management, which revealed that the platform ignored its impact on young women, maintained a system that protects elite users from being reprimanded for breaking content rules, and more. The investigation found a number of damning instances in which researchers identified and escalated information about the platform’s negative effects and the company did not immediately react.

The report also revealed that Facebook spent 2.8 million hours, or approximately 319 years, looking for false or misleading information on its platforms in the US in 2020. Some content that was missed related to the promotion of gang violence, human trafficking, and drug cartels, the Journal said.

In one instance, Apple threatened to kick Facebook off its App Store following an October 2019 BBC report detailing how human traffickers were using the platform to sell victims. The new Journal investigation found that Facebook knew about the trafficking concerns before receiving pressure from Apple, with one researcher writing in an internal memo that a team had looked into “how domestic servitude manifests on our platform across its entire life cycle: recruitment, facilitation, and exploitation” throughout 2018 and the first half of 2019.

“With any research, there will be ideas for improvement that are effective to pursue and ideas where the tradeoffs against other important considerations are worse than the proposed fix,” Clegg wrote. “What would be really worrisome is if Facebook didn’t do this sort of research in the first place.”

Clegg concluded that the company “understands the significant responsibility” that comes with operating a platform that half of the people on the planet use.

He said Facebook takes that responsibility seriously, “but we fundamentally reject this mischaracterization of our work and impugning of the company’s motives.”

Making fun of anti-vaxxers who died of COVID-19 is a dark indication that we’ve all surrendered to the disease

An anti-vaxx placard.

  • Anti-vaccine figures are dying of COVID and their deaths are being made light of.
  • This is a distraction and represents an acceptance of COVID’s death toll.
  • We don’t have to be nice to anti-vaxxers, but we should counter them to control infections.
  • Abdullah Shihipar is a contributing opinion writer for Insider.
  • This is an opinion column. The thoughts expressed are those of the author.

It has become a familiar narrative during this pandemic: a vocal anti-mask and anti-vaccine advocate ends up dying in the hospital with COVID-19. Most recently, it was 30-year-old Caleb Wallace of Texas, who died after a month-long bout with the disease. Wallace, who organized anti-mask protests and founded the San Angelo Freedom Defenders, died after being placed on a ventilator. He left behind a pregnant wife and two children.

Other times, it is anti-vaccine social media posts that make the headlines. Stephen Harmon, a man from Los Angeles, repeatedly mocked the vaccine, tweeting “Got 99 problems but a vax ain’t one” in June, only to die of the virus in July.

This trend, which has become ever more frequent as the delta variant spreads rapidly across the country, has inspired a series of jokes and memes on social media. I myself have partaken in a few “how it started, how it’s going” jokes. But ultimately, no matter how vile the target of the memes is and no matter how tempting it is to participate in the schadenfreude, it is an unhelpful distraction that represents a dark reality: we’re okay with how many people are dying from COVID-19.

Making light of the dark

Before I go any further, let me say that this is not an argument for empathy for those who are fiercely anti-vaccine or an attempt to understand their perspective. Nor is this an argument to persuade anti-vaccine advocates. Some of them have unfortunately gone down a destructive rabbit hole that even their loved ones find difficult to pull them out of. Being nicer won’t necessarily change that, but neither will making fun of them after they have died.

Eighteen months ago, when the virus first hit the shores of the United States, we were all terrified. People made runs on grocery stores as hospitals filled up with the dead, and people stayed home to “stop the spread” and flatten the curve. Every death was seen as a tragedy, a death we could have prevented with collective action. After a few months, right-wing governors and talking heads began promoting the idea that protecting oneself from a pandemic was an individual responsibility. If people wanted to go maskless, attend gatherings, or skip the vaccine, that was on them, they thought, ignoring the fact that the virus spreads from person to person. Of course, one person’s behavior during a pandemic affects the health of others.

A year later, with the vaccines plentiful, some on the left have adopted the right’s framing. It’s now accepted that COVID deaths, predominantly amongst the unvaccinated, are a matter of individual fate. You could have chosen to get that vaccine, but you didn’t, and that’s not my problem, they are implying.

This framing effectively prevents us from taking broader action to control the spread of the virus through mask mandates, restrictions, and testing. The abandonment of a collective framing around COVID-19 puts unvaccinated children and the immunocompromised at risk. It’s also easy to forget that, despite it all, there are still unvaccinated people who need to be reached; there are still people who need help getting a shot, still people who are deathly terrified of side effects, and still those who can’t take time off to get a shot.

The lines between the anti-vaccine crowd and the unvaccinated in general have become blurred, and we have seen that in headlines highlighting unvaccinated people who have died from the virus: average folks who were too busy and didn’t get around to it, were scared of side effects, or wanted to wait and see.

If we want to battle anti-vaccine sentiment, rather than adopting the individualist framing that Republicans proposed to begin with, we should counter anti-vaxxers while they are alive. We should show up at school board and city council meetings to outnumber them and state that we are in favor of mask mandates. We should pressure and boycott advertisers that buy time on programs that promote misinformation. We should push for accountability measures for Facebook, which has long tolerated anti-vaccine misinformation on its platforms.

And of course, we should push for measures that will stop the spread of the virus, including but not limited to mask mandates, vaccine mandates, vaccine sick leave, reducing prison populations and arrests (including immigration-related arrests), stopping evictions, and getting real worker protections from OSHA. Instead, tangible actions have been abandoned for ridicule.

I have spoken to people who have lost family members to anti-vaccine conspiracy theories. Those family members become increasingly isolated and shells of themselves, often hostile and aggressive toward their own kin. Families feel like they have been torn apart forever.

We must remember that there are family members who feel two waves of pain when their loved ones die: the physical loss, and the knowledge that the death could have been easily prevented.

It’s not easy to hear, but making fun of these deaths effectively means we have stopped resisting mass death and have accepted its reality. It’s tempting to think that there is some cosmic justice when an anti-vaxxer dies, but it’s just the reality of how a virus spreads. Yes, anti-vaxxers are dying, but so are scores of other people. I don’t write this just to lecture others, but rather to hold myself accountable. Using humor like this is an easy distraction and a shield from the shame we should feel that this is where our country is at such a late stage in the pandemic.

There will be more anti-vaccine people who die of this illness in the months to come, and I’m going to try to resist the urge to make light of it. The real joke is on us all.

Nicki Minaj’s vaccine tweets were the latest test of social media’s imperfect system for stopping misinformation’s spread

Twitter has not deleted or added a warning label to Nicki Minaj’s tweet containing unverified information about COVID-19 vaccines.

  • Twitter has not deleted or added a warning to Nicki Minaj’s tweet containing COVID misinformation.
  • Twitter said Minaj’s tweet will stay up because the rapper shared a personal anecdote.
  • When celebrities post misinformation, the result can be “incredibly damaging” to public health, one expert said.

Twitter’s response to Nicki Minaj’s bizarre post claiming that a COVID-19 vaccine caused her cousin’s friend’s testicles to swell shows how the platform relies on patchwork policies to curb misinformation.

The rap superstar said in several tweets this week that she has not gotten vaccinated against COVID-19 yet, in part because she wants to do more research after hearing a story from her cousin.

“My cousin in Trinidad won’t get the vaccine cuz his friend got it & became impotent,” Minaj wrote to her 22.7 million Twitter followers. “His testicles became swollen. His friend was weeks away from getting married, now the girl called off the wedding.”

No clinical studies of any COVID-19 vaccines being administered have linked the shot to impotence. The Centers for Disease Control and Prevention has not found any vaccine, including the one for COVID-19, to lead to fertility problems, and encourages pregnant people or those who may become pregnant to get a shot.

Though Minaj said on Instagram she was put in “Twitter jail” and unable to post, Twitter denied locking her account. The company also told Insider’s Isobel Hamilton that Minaj’s tweet will stay up because the rapper shared a personal anecdote. Content that states COVID-19 misinformation as fact may violate Twitter’s policy, the company said.

Minaj’s tweet appears to have led her fanbase to target public health officials: A small group of Minaj’s followers, who call themselves “Barbz,” protested outside the CDC’s Atlanta office. Anthony Fauci, the chief medical advisor to the president, even got involved, denying Minaj’s claim that the vaccine could lead to impotence.

Joe Smyser, PhD, MSPH, the chief executive officer of The Public Good Projects, told Insider that when celebrities like Minaj post misinformation, the result can be “incredibly damaging” to public health. Smyser said a celebrity’s followers trust them and view them as an authentic source of information.

“So when health authorities are put in the position of having to refute misinformation from a celebrity, and they definitely have to do this, it’s a lose-lose for everybody,” Smyser said.

That’s reflected in the statement made by Terrence Deyalsingh, Trinidad and Tobago’s minister of health, following Minaj’s tweets.

“What is sad about this is that it wasted our time yesterday trying to track it down, because we take all of these claims seriously,” Deyalsingh said. “As we stand now there is absolutely no reported such side effect or adverse effect of testicular swelling in Trinidad … and none that we know of anywhere else in the world.”

Twitter has released numerous initiatives and tools to combat the spread of false information, but a 2020 report from Oxford University found that many false posts remained on the platform without a warning label.

The White House has recently pointed to online misinformation as a roadblock to getting more Americans vaccinated.

Though she expressed hesitation about getting a vaccine, Minaj previously tweeted that she recommends people get one for work and that she will likely get a jab herself once she goes on tour.

In the past, Twitter has put warning labels on posts containing misinformation from prominent accounts.

The company labeled multiple posts from former President Donald Trump before permanently suspending his account. Twitter sometimes allows accounts belonging to those in government or running for office to violate its Civic Integrity Policy, citing the public interest.

But early in the pandemic, Twitter faced criticism when it added coronavirus misinformation warnings to tweets unrelated to the virus but that used terms like “5G” – the basis of a popular conspiracy theory at COVID’s onset – or “oxygen.”

Misinformation expert John Cook previously told Insider that Twitter should be careful about using warnings under tweets, or users could become cynical and inattentive when seeing them.

“We need these kind of warnings to be more surgical,” Cook told Insider. “We want to bring down the misinformation, but not hurt accurate information.”

Twitter was not available for additional comment.

Facebook spent the equivalent of 319 years labeling or removing false and misleading content posted in the US in 2020

  • Facebook spent 2.8 million hours, or approximately 319 years, looking for false or misleading information on its platforms in the US in 2020.
  • Some content that was missed related to gang violence, human trafficking, and drug cartels, according to the Wall Street Journal.
  • Some groups that promote hate or violence use fake Facebook and Instagram accounts and Messenger to recruit users.

Facebook spent the equivalent of 319 years labeling or removing false and misleading content posted in the US in 2020, according to a Wall Street Journal report.

Employees of the social media company have raised concerns about the spread of harmful and misleading information on its platforms, according to internal documents accessed by the Journal.

The documents detail how employees and contractors of the company spent more than 3.2 million hours in 2020 searching for, labeling, and taking down information that was false or misleading. Of those, 2.8 million hours, or approximately 319 years, were spent looking for content posted within the US. Facebook spent three times that amount of time on “brand safety,” or “making sure ads don’t appear alongside content advertisers may find objectionable,” according to the Journal.

The information accessed by the Journal details Facebook’s oversights related to issues like gang violence, human trafficking, drug cartels, and the spread of violent and often deceptive information. A recent study found that posts with misinformation got six times more likes, shares, and interactions on Facebook compared with posts from more reputable news sources.

Among the biggest recruitment tactics these violent groups use are fake Facebook and Instagram accounts and Messenger, according to the documents reviewed by the Journal.

A spokesperson from Facebook told the Journal that the company also plans to look into artificial intelligence that will help in its efforts against the spread of misinformation and violent content.

Facebook removed nearly 6.7 million pieces of organized hate content from its platforms from October through December of 2020. In March, Facebook announced it would stop recommending Groups that have violated Facebook’s standards and would limit the distribution of those groups’ content in users’ News Feeds, Insider reported. At the same time, Facebook also started telling users when a Group they are about to join has violated the company’s Community Standards. Facebook’s standards prohibit violent, harmful, illegal, and deceptive posts from being shared on the site.

Facebook has a secret system granting 5.8 million high-profile users immunity from its rules, a new report says

Facebook CEO Mark Zuckerberg in New York City on Friday, Oct. 25, 2019.

  • Facebook has a system protecting elite users from being reprimanded for breaking content rules, the Wall Street Journal reports.
  • The company’s “XCheck” system has protected Donald Trump, Doug the Pug, and other “influential” figures.
  • But Facebook employees have expressed disapproval of giving special treatment to certain users, the WSJ reports.

Facebook has a secret internal system that exempts 5.8 million users from having to follow the rules on its platform, according to the Wall Street Journal.

The paper on Monday published an investigation detailing how high-profile users of its services who are “newsworthy,” “influential or popular,” or “PR risky” don’t face the same enforcement actions as ordinary users, citing company documents it had viewed.

A former Facebook employee said in a memo that the company “routinely makes exceptions for powerful actors,” per the Journal.

Figures like former President Donald Trump, soccer star Neymar da Silva Santos Júnior, Sen. Elizabeth Warren, and even Doug the Pug are covered by the system, nicknamed “XCheck” or “cross check.” The system was created in response to the shortcomings of Facebook’s dual human and AI moderation processes.

But as The Journal reported, XCheck has led to a bevy of other problems.

When users are added to it, it’s more difficult for moderators to take action against them, as in the case of Neymar, who posted to his Facebook and Instagram accounts his WhatsApp communications with a woman who had accused him of rape. The screenshots showed her name and nude photos of her.

Neymar’s sharing of “nonconsensual intimate imagery” would have prompted Facebook to delete the post, but since Neymar was covered by XCheck, moderators were blocked from removing the content, which was then seen by 56 million online users.

Less than 10% of the content that XCheck flagged to the company as needing attention was reviewed, per a document reported by the paper. Facebook spokesperson Andy Stone told the Journal that the number grew in 2020 but did not provide evidence to support that assertion.

Most Facebook employees have the power to add users to the XCheck system for whitelisting status, a term used to describe high-profile accounts that don’t have to follow the rules. But the Journal viewed a 2019 audit that found Facebook doesn’t always keep a record of who it whitelists and why, which poses “numerous legal, compliance, and legitimacy risks for the company and harm to our community.”

Facebook employees, including an executive who led its civic team, expressed disapproval of the company’s practice of doling out special treatment for some users and said it was not in alignment with Facebook’s values, the paper reported.

“Having different rules on speech for different people is very troubling to me,” one wrote in a memo viewed by the Journal. Another employee also said Facebook is “influenced by political considerations” when making content moderation decisions, the paper reported.

Facebook acknowledged XCheck and its shortcomings years ago and told the Journal that it’s trying to terminate its whitelisting practice. Company documents also show Facebook’s intention to eradicate the system: as a solution, a product manager proposed a plan to stop allowing Facebook employees to add new users to XCheck.

Some of the company documents will be handed over to the Securities and Exchange Commission and Congress by a person requesting federal whistleblower protection, per the WSJ.

Zuckerberg has long touted one of his signature taglines: that Facebook’s leaders don’t want the platform to be the “arbiter of truth,” deciding what is true or false and then leaving up or removing content accordingly.

But that hands-off approach has tossed Facebook into a thorny position, especially in recent years, as critics say misinformation runs rampant on the site and some Republicans crusade against the company for serving a liberal agenda and discriminating against conservatives online.

Facebook has rolled out several changes in light of that scrutiny; reports surfaced in June that Facebook would stop exempting politicians from enforcement of its content rules.

Facebook did not immediately respond to Insider’s request for comment.

Read the full report from The Wall Street Journal here.

Reddit banned an anti-vaccine, anti-mask community after 135 of its biggest forums protested

Reddit bans a subreddit that often spreads COVID-19 misinformation.

  • Reddit banned the subreddit r/NoNewNormal after 135 subreddits launched a protest.
  • The subreddit was a source of misinformation about COVID-19, masks, and vaccines.
  • r/NoNewNormal was banned for intruding into COVID-related discussions on other subreddits.

Reddit announced on Wednesday that it banned the subreddit r/NoNewNormal, known for its anti-vax and anti-mask content, after 135 other subreddits protested the company’s refusal to ban COVID-19 disinformation.

An admin post on r/Security stated that r/NoNewNormal was banned for brigading, or intruding into discussions on other subreddits about COVID-19 and related topics, often to harass users. The subreddit was connected to 80 “brigades.”

Reddit also stated that the company will create a new reporting feature that will allow moderators to more easily tell the company when they see “community interference.”

Fifty-four other subreddits were also “quarantined,” meaning users see a warning before entering them and their content won’t be featured on the Reddit homepage. Though r/NoNewNormal had been quarantined since early August and had received warnings from moderators, the subreddit’s behavior didn’t budge.

Reddit’s CEO had previously said that Reddit should be a place for open discussion, drawing criticism for allowing dangerous misinformation to spread during a pandemic, leading some of the biggest subreddits to protest.

Subreddits such as r/Futurology and r/pokemongo decided to “go dark” and block non-members from joining or viewing the subreddit’s content in protest of Reddit’s reluctance to ban disinformation. Many specifically called out r/NoNewNormal as a source of misinformation that should be shut down.

“Weaponized misinformation is a key problem shaping our Future. Reddit won’t enforce their policies against misinformation, brigading, and spamming. Misinformation subreddits such as NoNewNormal and r/conspiracy must be shut down. People are dying from misinformation,” the moderators of r/Futurology wrote.
