Mark Zuckerberg said policing bullying is hard when the content is ‘not clearly illegal’ – in 44 states, cyberbullying can bring criminal sanctions

Mark Zuckerberg at the 56th Munich Security Conference in February 2020.

  • US Rep. Fred Upton asked Mark Zuckerberg what Facebook was doing to stop bullying.
  • Zuckerberg said the site has trouble moderating that content because it’s “not clearly illegal.”
  • 48 states have laws against online harassment and bullying.

Facebook CEO Mark Zuckerberg said it has been difficult for his social media network to police cyberbullying content, after a representative called him out for a lack of moderation on Facebook during a misinformation hearing on Thursday.

US Rep. Fred Upton, a Republican, referred to Monday’s shooting in Boulder, Colorado, saying there had been speculation that the shooter was bullied online. He asked Zuckerberg what Facebook was doing to stop bullying on its platform.

“It’s horrible and we need to fight it and we have policies that are against it, but it also is often the case that bullying content is not clearly illegal,” Zuckerberg said during the hearing.

Forty-eight states have laws against online harassment, which includes cyberbullying, according to data from a cyberbullying research site. In 44 of those states, the laws also carry criminal sanctions for online bullying and harassment, the research shows.

Read more: Facebook says it removed more than 1.3 billion fake accounts in the months surrounding the 2020 election

During the hearing, Zuckerberg proposed several changes to US internet legislation, including increased transparency from platforms like Facebook, standards for addressing illegal content such as cyberbullying on social media, and laws protecting smaller social media platforms from lawsuits and heavy regulation.

“When I was starting Facebook, if we had been hit with a lot of lawsuits around content, it might have been prohibitive for me getting started,” Zuckerberg said.

The purpose of Thursday’s hearing was to address the role of tech companies like Google, Facebook, and Twitter in the spread of misinformation – in particular, false information about the coronavirus, the US election, and the Capitol siege.

The sites were identified as a primary source of information for insurrectionists ahead of the attack on the Capitol. Many of the people who stormed the Capitol organized on platforms like Facebook in the weeks leading up to the siege.

Experts have also said that Facebook and Twitter should be held accountable for their hands-off approach to content moderation, and for potentially profiting from the spread of misinformation on their sites.


The CEOs of Google, Facebook, and Twitter are about to appear before Congress in a misinformation hearing. Here’s why the execs are testifying.

Left to right: Facebook CEO Mark Zuckerberg, Twitter CEO Jack Dorsey, and Google CEO Sundar Pichai.

  • The CEOs of Facebook, Google, and Twitter will testify in front of Congress on Thursday at noon ET.
  • The joint hearing was scheduled to discuss how misinformation spreads on these online platforms.
  • Tech firms have faced pressure throughout the pandemic, the 2020 election, and the Capitol siege.

Tech’s biggest figures will once again appear before Congress today.

Google CEO Sundar Pichai, Twitter CEO Jack Dorsey, and Facebook CEO Mark Zuckerberg will face questioning from two subcommittees of the House Energy and Commerce Committee – both chaired by Democratic lawmakers – over the companies’ role in the proliferation of misinformation online.

The virtual joint hearing was announced in February, over a month after pro-Trump extremists who stormed the US Capitol were found to have organized on social media platforms weeks in advance. The rioters were supporters of the “Stop the Steal” campaign, which falsely claimed that the 2020 presidential election had been stolen from former President Donald Trump. President Joe Biden was sworn into office on January 20.

Trump himself also used these platforms to spread baseless claims of election fraud. He did so while his supporters were breaching the federal building on January 6.

“For far too long, big tech has failed to acknowledge the role they’ve played in fomenting and elevating blatantly false information to its online audiences,” the committee chairs said in February. “Industry self-regulation has failed. We must begin the work of changing incentives driving social media companies to allow and even promote misinformation and disinformation.”

Online social platforms have faced mounting pressure to police false information since the onset of the pandemic, as users spread misleading claims about COVID-19. That pressure was compounded in the weeks surrounding the 2020 presidential election in November. Zuckerberg and Dorsey testified in front of the Senate that month over how they moderate content on their platforms.

The January 6 storming of the US Capitol was another significant milestone that brought more scrutiny of how tech platforms allow disinformation to spread. Experts told Insider in January that Facebook and Twitter are “indirectly involved” in the US Capitol siege since the platforms’ laissez-faire approach to content moderation gave the far-right a place to congregate for years.

Companies made unprecedented moves following the insurrection – Facebook banned Trump until at least January 20 and continues to bar him from the site while the company’s “supreme court,” its independent Oversight Board, considers the case. Twitter, Trump’s longtime favorite mouthpiece, permanently suspended his account and has said he will remain banned even if he decides to run for office again.

Read more: Trump’s Twitter had the whole world on edge. Here’s how the Biden White House plans to make @POTUS sane again.

In a Monday blog post, Facebook’s vice president of integrity, Guy Rosen, said the company had removed millions of pieces of content containing misinformation about the COVID-19 pandemic and vaccines, as well as more than 1.3 billion fake accounts.

“Despite all of these efforts, there are some who believe that we have a financial interest in turning a blind eye to misinformation,” Rosen said in the post. “The opposite is true. We have every motivation to keep misinformation off of our apps and we’ve taken many steps to do so at the expense of user growth and engagement.”

The hearing is called “Disinformation Nation: Social Media’s Role in Promoting Extremism and Misinformation.” It will be live-streamed on Thursday starting at noon ET.
