‘The most powerful person who’s ever walked the face of the earth’: How Mark Zuckerberg’s stranglehold on Facebook could put the company at risk

Mark Zuckerberg
Facebook CEO Mark Zuckerberg.

  • Facebook CEO Mark Zuckerberg has 55% of the company’s voting shares, giving him majority power.
  • Experts say it’s “a bad idea” for one person to hold so much control of a behemoth like Facebook.
  • The concern is heightened by leaked documents showing Facebook dismissed evidence of its platforms’ societal harm.

Facebook CEO Mark Zuckerberg holds 55% of voting shares in the company, former employee and whistleblower Frances Haugen told Congress last week.

That’s significant, according to her, because it’s “a very unique role in the tech industry.” She said “there are no similarly powerful companies that are as unilaterally controlled.” Or, put more bluntly in another part of the hearing: “There is no one currently holding Mark accountable but himself.”

He is, essentially, Facebook’s “key man,” a person who has ultimate say in business decisions and without whom the firm would be heavily impacted.

Experts told Insider that there is cause for concern around one person having control over a controversial family of platforms that affect hundreds of millions of people.

“I don’t think it’s a stretch to argue that Mark Zuckerberg is the most powerful person who’s ever walked the face of the earth, and I think that kind of power being held by one person is generally a bad idea,” Whitney Tilson, a former hedge fund manager and CEO of Empire Financial Research, told Insider.

Zuckerberg’s outsized power has been debated for years

Company founders with majority control of a firm aren’t rare. It’s most prevalent in the tech world, where dual-class structures are common. Google founders Sergey Brin and Larry Page, for example, stepped away from the giant in 2019 but remained on the board and collectively still have majority control of the company.

“You typically don’t have companies like GM or Ford or Bank of America that are controlled by any one investor,” Chris Haynes, an associate professor of international affairs and political science at the University of New Haven, told Insider. “This is not the norm.”

Proponents of the arrangement often say it allows company leaders to stay trained on long-term success without distractions from short-term pressures.

“Having a company controlled by a single person, ‘the brainchild’ when it comes to tech companies, it does make the company much more nimble, and they’re able to really turn on a dime,” Haynes said, since they don’t have to get a lot of investors on board to make a decision.

But it can also slow things down. Facebook lists Zuckerberg’s outsize control as a potential risk factor for investors, saying it “could delay, defer, or prevent a change of control, merger, consolidation, or sale of all or substantially all of our assets that our other stockholders support.”

Critics say the control can shield companies from concerns that can harm society and investors, and it can cause volatility.

“I think you’re seeing that in the case of Facebook,” Haynes said.

Facebook has had a rocky few weeks after documents leaked by Haugen to the Wall Street Journal showed Zuckerberg and other insiders knew the company’s platforms had negative effects on the public and dismissed those concerns.

“Facebook is bigger than any religion in the history of the world, and there is 100% control residing in one man,” Tilson said.

Joy Poole, a former Facebook employee who’s now at the consulting firm Emergence, told Insider that lawmakers should “absolutely” explore how regulation could limit the majority stakes CEOs hold in their companies. But there may be more pressing issues.

“Mark Zuckerberg has majority control over a company with a tremendous amount of influence in the world,” Poole said. “I don’t believe for a minute though, that if he had 49% control, that we would magically find answers to the complex questions we are facing here.”

However, last week’s hearing with Haugen and her disclosure of internal documents to the US Securities and Exchange Commission could change things, Tilson said, though it’s not likely.

If the SEC investigation finds that Facebook misled investors by failing to disclose its research showing harm to teens, among other findings, then the agency could demand that Zuckerberg step down, according to Tilson.

“That would be the only way I can think of that would overcome his controlling voting shares,” he said.

Read the original article on Business Insider

Facebook’s bad few weeks could leave it facing 2 threats even bigger than new laws, analyst says

Facebook CEO Mark Zuckerberg Testifies Before The House Financial Services Committee
Facebook CEO Mark Zuckerberg.

  • Facebook’s brutal few weeks could lead to class-action lawsuits and probes led by state attorneys general.
  • Analyst Blair Levin of New Street Research said that is a greater threat than potential regulation.
  • Litigation could force Facebook to share documents, which could include “problematic evidence.”

Facebook has had a brutal few weeks, and lawmakers now have more fodder than ever to finally rein in Big Tech.

But the social network actually faces two threats bigger than potential federal regulation. According to Blair Levin of New Street Research, Facebook’s spate of bad press following leaks from a whistleblower and her subsequent congressional testimony could incite new class-action lawsuits or investigations by state attorneys general.

That’s because a court could force Facebook to make internal documents public, which could “yield even more problematic evidence for Facebook,” Levin said in an interview. Litigation could also cost the company a pretty penny – more than legislation would – and a settlement could “address the issues more quickly than legislation.”

“In a Senate hearing, the strategy is to run out the clock,” Levin said. Tech CEOs, including Mark Zuckerberg, have appeared in such settings. “You can’t do that in a deposition,” where a company is required to share details under oath as part of litigation proceedings, he continued.

Levin said that more class action and state AG-led litigation could come from Facebook’s recent scandals, which began when whistleblower and former employee Frances Haugen shared internal documents with The Wall Street Journal and later testified before Congress that the firm prioritizes profits over people’s safety.

The Journal reported, among other things, that Facebook knew its Instagram platform was negatively impacting the mental health of young users, especially teenage girls.

Facebook has disclosed internal material in the past through legal battles, like when the Federal Trade Commission in 2021 accused the company of buying WhatsApp and Instagram to neutralize competition.

A judge eventually threw out the lawsuit, saying the FTC had insufficient evidence proving that Facebook is a monopoly, but not before emails Zuckerberg had written years ago were made public. “It is better to buy than compete,” Zuckerberg wrote in a 2008 email, according to the now-dismissed lawsuit.

According to the FTC, these messages showed he and others in leadership perceived Instagram and WhatsApp as rivals before Facebook bought them.

Levin said the most recent issues surrounding kids on Facebook’s platform are different because “more Americans care more about their children than they care about democracy, privacy, misinformation, and violence abroad.” The hearing lent credibility to the belief that it’s impossible for the tech industry to self-regulate.

Levin said possible litigation arising from the current news cycle could include lawsuits from parents who claim their children experienced trauma on a platform like Instagram. He said suits like that “are very hard to win,” but are still “problematic” for the company.

“These things don’t happen overnight,” Levin said, but we might start seeing litigation in response to Facebook’s recent problems within three to six months.

Read the original article on Business Insider

Teens really hate Facebook and Twitter

Jack Dorsey Twitter CEO
Twitter CEO Jack Dorsey

  • Twitter and Facebook are teenagers’ least favorite social media platforms, a new survey found.
  • Snapchat and TikTok were their favorites, while Instagram was the most used, with 81% of respondents on it.
  • Teens are assets to social media companies as they seek to recruit more users.
  • See more stories on Insider’s business page.

The youngsters have spoken, and their verdict? Twitter and Facebook are their least favorite social media platforms.

Piper Sandler published its semiannual “Taking Stock With Teens” survey Tuesday, which asked 10,000 US teenagers a wide range of questions about various brands and services across a plethora of industries.

One of them asked which social media service they favored most, a title that went to Snapchat: 35% of respondents said the app was their favorite, followed by 30% for TikTok and 22% for Instagram.

Just 2% said Twitter or Facebook was their favorite.

Twitter has become a hotspot for politicians, journalists, and celebrities, while Facebook has been overrun by Baby Boomers as younger users flock to photo- and video-sharing platforms like TikTok and Instagram.

Instagram was the most used service among teens, according to the survey, with 81% saying they use the platform. Snapchat and TikTok came in second and third, respectively.

Social media companies are hard-pressed to tap into the next generation of young users and recruit them to use their services. Facebook’s Instagram has copied TikTok, beloved by Gen Z-ers, and added a short video feature to its platform.

Instagram was also working on a separate kids-centric app before pausing those plans due to backlash from lawmakers and critics. Whistleblower Frances Haugen gave internal documents to the Wall Street Journal, which showed that Facebook was aware Instagram had negative mental health effects on teenage girls and failed to disclose those findings to Congress in previous hearings.

Instagram said its kids-focused app would equip parents with special mechanisms to help them supervise their children while using it.

Read the original article on Business Insider

Facebook is reportedly holding off on new product launches that could hurt its reputation after a brutal few weeks

Mark Zuckerberg, Facebook
Facebook CEO Mark Zuckerberg in New York City on Friday, Oct. 25, 2019.

  • Facebook is holding off on some product launches to conduct “reputational reviews” first.
  • The Wall Street Journal reports that the company wants to examine if products could invite new criticism.
  • The report comes after a whistleblower testified over claims against the company this week.

Facebook is holding off on new product launches so it can assess whether they may harm the company’s public image, according to a Wall Street Journal report Wednesday.

Sources told the paper that Facebook is conducting “reputational reviews” to help the company determine if products will leave it vulnerable to public criticism. They also said the reviews are to make sure the products won’t negatively affect young users.

Some sources also told the paper that an internal team is currently analyzing internal research that could damage the company’s image if exposed.

Facebook did not immediately respond to Insider’s request for comment. Spokesperson Andy Stone told the Journal that Facebook is working to understand its internal research better.

The news comes after Facebook-owned Instagram delayed its kids-centric version of the app, a move that followed whistleblower Frances Haugen leaking internal documents to the Journal showing that Facebook’s own research found Instagram was harming the mental health of teenage girls.

Haugen testified before Congress on Tuesday and said she’d be “sincerely surprised” if Facebook doesn’t continue working on Instagram Kids.

Haugen shared the documents with the Senate as well and testified that Facebook routinely chooses “profits over people.” She said there needs to be a federal regulatory body to oversee Facebook and other internet platforms.

The Journal published a series of articles based on the leaked documents – named the Facebook Files – beginning in September. The reports garnered widespread backlash from the public and lawmakers about the harm that Facebook can pose.

Facebook has pushed back against the Journal’s reporting and against Haugen’s characterization of the company in her testimony. CEO Mark Zuckerberg responded Tuesday, saying “the argument that we deliberately push content that makes people angry for profit is deeply illogical. We make money from ads, and advertisers consistently tell us they don’t want their ads next to harmful or angry content.”

Read the original article on Business Insider

The Facebook whistleblower hearing wasn’t about cries of censorship or other political theater for once – it was about the company’s unhealthy reliance on AI

frances haugen facebook whistleblower hearing
Former Facebook employee Frances Haugen testifies on Capitol Hill October 5, 2021 in Washington, DC.

  • On Tuesday, there was a Big Tech congressional hearing void of tedious political gestures.
  • Republicans largely avoided complaining about censorship as they have in other hearings.
  • Instead, whistleblower Frances Haugen stayed focused on the potential harm posed by algorithms.

Congress held a hearing Tuesday centered around a big tech company – and it didn’t devolve into political theater.

That’s a major shift from legislators’ track record of hearings involving tech platforms in recent years.

Republican lawmakers typically use their talking time to question witnesses over what they say is tech’s crusade to silence conservatives by removing their posts. Democrats often make a point to apologize to witnesses for their colleagues’ behavior before lambasting the companies over not doing enough to remove more posts containing hate speech and lies.

But the whistleblower hearing was void of that usual political tug-of-war, and former Facebook employee Frances Haugen kept two things center stage in her answers: artificial intelligence and algorithms. And lawmakers on both sides of the aisle had productive, relevant questions prepared for her.

Facebook uses computers to help decide what kinds of content to show users. The platform looks at what posts they engage with the most and uses those findings to surface similar content in News Feed that people are likely to also engage with. The goal is to keep people on the platform, putting more ad dollars in Facebook’s pockets.

But the pitfall of such computer-driven platforms, Haugen said, is that the most engaged-with content is usually angry, divisive, sensationalistic posts containing misinformation, and using machines to elevate it to more people is dangerous.
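The contrast between engagement-based ranking and the chronological ordering raised later in the hearing can be illustrated with a deliberately simplified sketch. The scoring weights here are hypothetical; Facebook’s actual ranking models are far more complex and not public.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    text: str
    posted_at: datetime
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> int:
    # Hypothetical weights: interactions that spread content further
    # (comments, shares) count more than passive likes.
    return post.likes + 2 * post.comments + 3 * post.shares

def engagement_ranked_feed(posts: list[Post]) -> list[Post]:
    # Engagement-based ranking: the most interacted-with posts surface
    # first, regardless of when they were published.
    return sorted(posts, key=engagement_score, reverse=True)

def chronological_feed(posts: list[Post]) -> list[Post]:
    # The alternative discussed at the hearing: newest posts first,
    # with no machine-chosen ordering at all.
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)
```

Under the first ordering, an older post that provokes many comments and shares outranks a newer, calmer one; under the second, recency alone decides what users see.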

“If we had appropriate oversight or reformed 230 to hold Facebook responsible, I think we could get rid of engagement-based ranking,” Haugen said Tuesday, referring to the industry term for algorithms that choose content for you.

Facebook’s practice of ranking posts by engagement can be traced back to a 2018 algorithm change that favored divisive content, which worried employees, according to the internal documents Haugen gave to the Wall Street Journal. Facebook made the change after it found people were spending less time on the platform, the paper reported.

“I strongly encourage reforms that push us toward human-scale social media and not computer-driven social media,” Haugen said. “Those amplification harms are caused by computers choosing what’s important to us, not friends and family.”

Tuesday’s hearing may have involved less political theater, but it wasn’t completely free of it. Just one Republican senator brought up alleged censorship, and Haugen wasn’t swayed.

ted cruz senate hearing facebook whistleblower
Sen. Ted Cruz (R-TX) questions former Facebook employee Frances Haugen on October 5, 2021 in Washington, DC.

Sen. Ted Cruz of Texas asked Haugen if she’s concerned about Facebook and tech’s “pattern of engaging in political censorship,” which NYU researchers disproved in a report earlier this year.

In response, she simply stayed focused on how algorithms amplify misinformation.

Haugen said the “mechanisms of amplification” need to be changed, perhaps by organizing News Feed content in order of the time that it is posted.

“We don’t want computers deciding what we focus on,” Haugen said in response to Sen. Cruz’s question. “We should have software that is human-scaled, where humans have conversations together, not computers facilitating who we get to hear from.”

Read the original article on Business Insider

Experts say it’s ‘theoretically possible’ the Facebook outage was connected to a whistleblower’s bombshell claims against the company – but not likely

Facebook whistleblower Frances Haugen testifies to senate committee
Facebook whistleblower Frances Haugen testifies before a Senate committee.

  • Hours after an interview with a Facebook whistleblower aired, the firm suffered a widespread outage.
  • Experts told Insider the two events likely aren’t connected and are merely a “coincidence.”
  • But they also said it’s “theoretically possible,” and hackers could have attacked to “make a point.”

Facebook and its carousel of apps suffered a sweeping six-hour outage Monday in what became a high-profile event for the firm.

The night before, a CBS “60 Minutes” interview with former Facebook employee Frances Haugen had aired, in which the whistleblower, who had leaked internal documents, detailed how she believes the company routinely prioritized profits over the safety of its users.

Experts told Insider it’s not likely that the two events are connected, but that doesn’t mean it’s impossible.

Purandar Das, president and co-founder of the data security firm Sotero, told Insider that it’s possible hackers or an activist group targeted the platform “to make a point,” though if that were the case, the attackers would have claimed responsibility for it by now.

“I would tend to lean towards the coincidence angle,” Das said. “Keep in mind, Facebook has already been hacked and has lost millions of user records. We will find out in the near term if this was an attack because the attackers will not stop.”

Das also said hackers moving forward could “even look to exfiltrate internal documents and communications to publicly embarrass the company.”

Ryan Lloyd, a security strategist at the mobile application security firm GuardSquare, told Insider that it’s “theoretically possible” that someone could have purposefully caused a disruption, but it’s difficult to say for certain.

“This could have very easily been an accident that was the byproduct of some intentional work and the DNS configuration settings in question here were collateral damage of some other initiative,” Lloyd said. “But it’s hard to know for sure without really being on the inside and seeing what’s going on.”

On Monday, Facebook and its WhatsApp, Instagram, and Messenger services were inoperable for hundreds of thousands of users as well as scores of Facebook employees who couldn’t access the company’s internal servers and communications systems.

Instagram CEO Adam Mosseri equated the situation to a “snow day” for employees who were unable to do any work.

Facebook’s vice president of infrastructure, Santosh Janardhan, said Tuesday that the outage was “an error of our own making” and blamed a routine maintenance job for knocking the company’s servers offline. The outage was not due to any malicious activity, he said.

On Tuesday, Haugen testified before Congress regarding the Facebook documents she exposed, claiming that regulatory action is needed against the company.

Haugen said she doesn’t know why the services went down. “But I know that for more than five hours, Facebook wasn’t used to deepen divides, destabilize democracies, and make young girls and women feel bad about their bodies,” she said.

“It also means the millions of small businesses weren’t able to reach potential customers, and countless photos of new babies weren’t joyously celebrated by family and friends around the world,” she said.

Read the original article on Business Insider

Facebook’s whistleblower is testifying before Congress – here are the most important moments so far

Facebook whistleblower Frances Haugen arrives at Senate hearing
Frances Haugen, a former Facebook employee, arrives to testify on Tuesday, October 5, 2021.

  • Former Facebook employee and whistleblower Frances Haugen will testify before Congress Tuesday.
  • The hearing comes after she leaked internal documents showing the company’s controversial practices.
  • She’s expected to testify that Facebook prioritized profits over stopping extremism and division.

Facebook whistleblower Frances Haugen is testifying before Congress Tuesday after leaking internal documents showing the tech giant’s controversial business practices.

The documents Haugen shared with the Wall Street Journal showed, in part, that Facebook knew Instagram negatively impacted the mental health of its young users, especially teenage girls. They also showed employees were worried that a 2018 algorithm change further promoted sensationalistic and divisive content to users.

Facebook consistently resolves conflicts “in favor of its own profits,” Haugen said in her opening remarks. “The result has been more division, more harm, more lies, more threats, and more combat.”

You can watch the hearing on the US Senate’s website here.

Haugen referenced Monday’s sweeping Facebook outage

In her opening remarks, Haugen said she doesn’t know why the services went down. “But I know that for more than five hours, Facebook wasn’t used to deepen divides, destabilize democracies, and make young girls and women feel bad about their bodies,” she said.

“It also means the millions of small businesses weren’t able to reach potential customers, and countless photos of new babies weren’t joyously celebrated by family and friends around the world,” she said.

‘The buck stops with Mark’

Haugen said CEO Mark Zuckerberg holds more than half of all voting shares for Facebook, giving him unilateral control over the company. In that sense, “the buck stops with” him when making major decisions at Facebook.

“There is no one currently holding Mark accountable but himself,” Haugen told Congress.

This story is developing. Check back for updates.

Read the original article on Business Insider

How tech platforms were dragged into America’s polarized political tug-of-war

A blue Democratic donkey and red Republican elephant playing tug of war with Google, Twitter, Instagram, and Facebook logos wrapped in the center on a gray background.
Facebook, Google, Twitter, and others have become targets on Capitol Hill, with lawmakers using them to push their agendas.

  • Lawmakers have weaponized tech firms and their content moderation decisions to drive agendas.
  • It’s the culmination of a slew of factors, like the post-2016 techlash and the Trump administration.
  • Experts say tech animosity has become “a core Republican tenet,” and progressives want more rules.

Tech companies haven’t had an easy time lately, with lawsuits and criticism lobbed at them.

But the platforms have also been dragged into a new war in recent years: lawmakers using them and the decisions they make as punching bags to drive their political agendas.

Experts told Insider it’s the product of the post-2016 election realization that online platforms were not all benign, a Trump-era political marketing test, internet platforms’ shift from their historical hands-off approach to content moderation, and mounting polarization in a country where a political tug-of-war was growing ever nastier.

“We’ve always seen polarization in the US,” Ari Lightman, a professor of digital media at Carnegie Mellon and a social media expert, told Insider. “Social media companies just escalate that.”

Republicans and Democrats want Big Tech reined in – for very different reasons

facebook mark zuckerberg
Facebook CEO Mark Zuckerberg at a Senate hearing in 2018.

One of the first major instances of Trump accusing a tech company of anti-conservative bias was in August 2018, when he said Google was promoting former President Barack Obama’s speeches ahead of his own in search results.

“Politicians are always looking for successful marketing, and he was testing the idea,” John Samples – a vice president of the CATO Institute and a member of Facebook’s Oversight Board – told Insider.

It worked, and from that point on, every decision that companies made around what to flag, remove, or keep up on their sites became another data or talking point to support a cause.

For conservatives, that cause was the belief that internet platforms are hellbent on silencing them. For progressives, the argument that tech platforms don’t do enough to crack down on falsehoods and hate speech dates back to Obama-era scholars, Samples said.

Once the 2016 US presidential election came around, it didn’t just spawn the “techlash” – it produced a president whose favorite messengers were the very internet platforms he would end up crusading against, and “antipathy toward social media elites became a core Republican tenet,” Samples said.

The divisive tone on social media became even more pronounced by the 2020 election cycle. Republicans repeated Trump’s unfounded claims that the election was stolen, riling up a base that was already heated after a year of pandemic-driven safety protocols. Democrats had to use their platforms to repeat that it was the most secure election in history. Both sides were shouting into a void of followers who already believed what they were saying.

And through it all, members of Congress began pouncing on opportunities to grill tech CEOs, which often devolved into political theater, even though some of tech’s biggest critics in Washington happily use the platforms to their advantage to win elections.

twitter jack dorsey john kennedy
Twitter CEO Jack Dorsey and Sen. John Kennedy at a November hearing in 2020.

After Zuckerberg reportedly said he’d “go to the mat and fight” threats to break up the company, Democratic Rep. Alexandria Ocasio-Cortez tweeted that his comments signaled he was against keeping corporate power and monopolies in check.

Sen. Elizabeth Warren tweeted last month that “no company should be too big to be held accountable for spreading misinformation” after the Wall Street Journal reported an algorithm change favored divisive false content.

Republican Reps. Madison Cawthorn and Marjorie Taylor Greene and Sens. Josh Hawley and Ted Cruz are some of the loudest voices posting about alleged censorship.

Cruz in January tweeted that “Big Tech’s PURGE, censorship & abuse of power is absurd & profoundly dangerous,” after platforms began suspending Trump following the January 6 Capitol insurrection.

“Some of that is just politics, some of that is a general reaction,” Paul Barrett, a deputy director at NYU’s Stern Center, told Insider. Barrett was among the NYU researchers who published a report that disproved conservatives’ claims of anti-right discrimination online.

Social media companies and the rules they enforce are now inextricably subject to vicious political judgment.

Zuckerberg “went from being angelic to being Satan, and it happened in three or four years,” Samples said. “But it’s really tied up in the politics of the country.”

Read the original article on Business Insider

YouTube bans all anti-vaxx content – not just misinformation about COVID-19 shots

Sundar Pichai
Google CEO Sundar Pichai.

  • YouTube said it is banning all content claiming that approved vaccines do not work or are harmful.
  • For the first time, that includes vaccines for illnesses other than COVID-19.
  • The ban is a departure from the industry’s historical hands-off approach to content moderation.

YouTube is banning all anti-vaccine content on its platform, including misinformation about approved vaccines for common illnesses in addition to COVID-19, the company said Wednesday.

The Google-owned platform will remove any video claiming that well-known vaccines approved by federal health officials are harmful, it said in a blog post first reported by the Washington Post. That includes content claiming vaccines cause autism, cancer, or infertility, or that they allow recipients to be tracked via microchip.

YouTube had previously banned false information about the coronavirus vaccines in October 2020. The company said it will still allow discussion of vaccine policies, new vaccine trials, and personal accounts of receiving the vaccine.

A YouTube spokesperson also confirmed to Insider that the company will remove the accounts of high-profile anti-vaxxers like Robert F. Kennedy Jr., the nephew of former President John F. Kennedy, and anti-vaccine activist and author Joseph Mercola.

Kennedy Jr. was one of 12 people whom a recent report identified as the most prolific spreaders of COVID-19 disinformation online.

Wednesday’s expansion of rules related to vaccine content marks a major change in how the company handles content on its service.

“Developing robust policies takes time,” Matt Halprin – YouTube’s vice president of global trust and safety – told the Post. “We wanted to launch a policy that is comprehensive, enforceable with consistency and adequately addresses the challenge.”

YouTube and other social media companies have long taken a hands-off approach to moderating content. But pressure has increased from regulators and the general public in recent years, especially amid the pandemic and 2020 presidential election, for platforms to more actively police disinformation on their websites.

Facebook and Twitter have also moved to limit the spread of COVID-19 vaccine misinformation online. Still, false content has leaked through – private groups devoted to discussing and taking unproven COVID-19 treatments like the antiparasitic drug ivermectin proliferated, Insider reported in early September.

Companies also began cracking down on former President Donald Trump’s false statements in 2020, thrusting the topic of social media platforms’ content moderation into an ongoing political war.

Read the original article on Business Insider

Facebook’s AI moderation reportedly can’t interpret many languages, leaving users in some countries more susceptible to harmful posts

facebook ceo mark zuckerberg
Facebook CEO Mark Zuckerberg in 2018.

  • Facebook’s automated content moderators can’t speak many languages used on the site.
  • Human moderators also can’t speak languages used in some foreign markets Facebook has moved into.
  • The blind spots sometimes let bad actors post harmful, violent content and conduct illegal business.

Facebook’s artificial intelligence-powered content moderators can’t read some languages used on the platform, raising concerns about how the company is policing content in countries that speak languages other than English, The Wall Street Journal reported Thursday.

The paper viewed company documents showing Facebook doesn’t have enough employees who speak local languages to monitor activity in other countries – markets the company has expanded into to bolster its non-US user base. More than 90% of Facebook’s monthly users are outside North America, per the paper.

The report shows how the lack of human moderators with multilingual skills – combined with the shortcomings of relying on machines to weed out toxic posts – is weakening Facebook’s ability to monitor harmful content online, a topic that has brought the company under heavy scrutiny in recent years.

Facebook employees have expressed concerns about how the system has allowed bad actors to use the site for nefarious purposes, according to the documents viewed by The Journal.

A former vice president at the company told the paper that Facebook perceives potential harm in foreign countries as “simply the cost of doing business” in those markets. He also said there is “very rarely a significant, concerted effort to invest in fixing those areas.”

Drug cartels and human traffickers have used Facebook to recruit victims. One cartel in particular, which US officials say poses the biggest criminal drug threat to the US, used multiple Facebook pages to post photos of violent, graphic scenes and gun imagery. An internal investigation team wanted the cartel banned completely, but the team tasked with doing so never followed up, per the report.

In Ethiopia, groups have used Facebook to incite violence against the Tigrayan people who are victims of ethnic cleansing. That content slipped through the cracks due to a lack of moderators who speak the native language. The company also hadn’t translated its “community standards” rules to languages used in Ethiopia, per the Journal.

And most Moroccan Arabic-speaking Facebook moderators aren’t able to speak other Arabic dialects, which allowed violent content to remain up.

In most cases, Facebook took down harmful posts only after they garnered public attention, and it hasn’t fixed the automated systems – dubbed “classifiers” – that allowed the content to be published in the first place, per the report.
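The coverage gap the report describes can be sketched in miniature. In this hypothetical example, keyword sets stand in for the machine-learning classifiers Facebook actually uses; the point is structural: a post in a language with no trained classifier passes through unchecked.

```python
# Hypothetical, heavily simplified stand-in for per-language moderation
# "classifiers". Real classifiers are machine-learning models, not
# keyword lists; only the shape of the coverage gap matters here.
CLASSIFIERS = {
    "en": {"banned phrase"},  # a language with a trained classifier
    # ...no entries for many languages spoken on the platform
}

def moderate(text: str, language: str) -> str:
    terms = CLASSIFIERS.get(language)
    if terms is None:
        # No classifier exists for this language, so nothing is
        # checked: the post is published without review.
        return "published unreviewed"
    if any(term in text.lower() for term in terms):
        return "removed"
    return "published"
```

In this sketch, the same harmful text is removed in English but sails through in any language the system doesn’t cover, mirroring the moderation blind spots the Journal documented.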

Facebook did not immediately respond to a request for comment.

Spokesman Andy Stone told the Journal that “in countries at risk for conflict and violence, we have a comprehensive strategy, including relying on global teams with native speakers covering over 50 languages, educational resources, and partnerships with local experts and third-party fact checkers to keep people safe.”

The issue is reminiscent of what Facebook acknowledged as a lack of action against groups targeting the minority Rohingya group, victims of ethnic cleansing, in Myanmar in 2018.

Another example came in May, when Facebook employees said the company’s removal of posts that included the hashtag al-Aqsa, referring to a mosque in Jerusalem that is the third-holiest site in Islam, was “entirely unacceptable.” The company had cracked down on the name because of a Palestinian militant coalition, the Al-Aqsa Martyrs’ Brigades, which has been labeled a terrorist organization by the US and EU.

One employee said the company used both human and automated moderating systems and should have consulted experts knowledgeable about the Palestinian-Israeli conflict, BuzzFeed reported.

Read the full report on The Wall Street Journal here.

Read the original article on Business Insider