Mark Zuckerberg said policing bullying is hard when the content is ‘not clearly illegal’ – in 44 states, cyberbullying can bring criminal sanctions

Mark Zuckerberg at the 56th Munich Security Conference in February 2020.

  • US Rep. Fred Upton asked Mark Zuckerberg what Facebook was doing to stop bullying.
  • Zuckerberg said the site has trouble moderating that content because it’s “not clearly illegal.”
  • 48 states have laws against online harassment and bullying.
  • See more stories on Insider’s business page.

Facebook CEO Mark Zuckerberg said it has been difficult for his social media network to police cyberbullying content, after a representative called him out for a lack of moderation on Facebook during a misinformation hearing on Thursday.

US Representative Fred Upton, a Republican, referred to Monday’s shooting in Boulder, Colorado, saying there was a lot of speculation the shooter had been bullied online. He asked Zuckerberg what Facebook was doing to stop bullying on its platform.

“It’s horrible and we need to fight it and we have policies that are against it, but it also is often the case that bullying content is not clearly illegal,” Zuckerberg said during the hearing.

Forty-eight states have laws against online harassment, which includes cyberbullying, according to data from a cyberbullying research site. Forty-four of those states also impose criminal sanctions for online bullying and harassment, the research shows.

Read more: Facebook says it removed more than 1.3 billion fake accounts in the months surrounding the 2020 election

During the hearing, Zuckerberg presented several changes that could be made to internet legislation in the US, including increased transparency for platforms like Facebook, standards for addressing illegal content like cyberbullying on social media, as well as laws protecting smaller social media platforms from lawsuits and heavy regulations.

“When I was starting Facebook, if we had been hit with a lot of lawsuits around content, it might have been prohibitive for me getting started,” Zuckerberg said.

The purpose of Thursday’s hearing was to address the role of tech companies like Google, Facebook, and Twitter in the spread of misinformation – in particular false information about the coronavirus, the US election, and the Capitol siege.

The sites were identified as a primary source of information for insurrectionists leading up to the attack on the Capitol. Many people who stormed the Capitol organized on websites like Facebook in the weeks leading up to the siege.

Experts have also said that Facebook and Twitter should be held accountable for their hands-off approach to content moderation, and for potentially profiting from the spread of misinformation on their sites.

Read the original article on Business Insider

Facebook is building an Instagram app for kids under 13, led by the former head of YouTube Kids

The logo of the Instagram app on a smartphone.

  • Facebook is building an Instagram app for kids under 13, BuzzFeed News reported Thursday.
  • The project will be led by Pavni Diwanji, who previously led YouTube’s kid-focused products.
  • Facebook has faced backlash over the safety and mental health impacts of such products.
  • See more stories on Insider’s business page.

Facebook-owned Instagram is planning to build a version of its app targeted specifically toward children under 13, BuzzFeed News reported Thursday.

“We have identified youth work as a priority for Instagram and have added it to our H1 priority list,” Instagram vice president of product Vishal Shah said in an internal memo, according to BuzzFeed.

“We will be building a new youth pillar within the Community Product Group to focus on two things: (a) accelerating our integrity and privacy work to ensure the safest possible experience for teens and (b) building a version of Instagram that allows people under the age of 13 to safely use Instagram for the first time,” Shah added, according to BuzzFeed.

Currently, Instagram’s policies prohibit children under 13 from using the app, though a parent or manager can run an account on a child’s behalf.

BuzzFeed News reported the kid-focused version will be overseen by Instagram head Adam Mosseri and led by Pavni Diwanji, a Facebook vice president who previously led YouTube Kids and other child-focused products at the Google subsidiary.

“Increasingly kids are asking their parents if they can join apps that help them keep up with their friends. Right now there aren’t many options for parents, so we’re working on building additional products – like we did with Messenger Kids – that are suitable for kids, managed by parents,” a Facebook spokesperson told Insider in a statement.

“We’re exploring bringing a parent-controlled experience to Instagram to help kids keep up with their friends, discover new hobbies and interests, and more,” they added.

But Facebook’s push to draw young children into its app ecosystem is likely to draw scrutiny given its track record on privacy, preventing abuse and harassment, and scandals involving its Messenger Kids app.

BuzzFeed’s report comes just days after Instagram published a blog post introducing new child safety features, including AI-powered tools to guess users’ ages – despite acknowledging “verifying people’s age online is complex and something many in our industry are grappling with.”

Facebook’s stepped-up efforts to protect children follow years of reports that rampant bullying, child sex abuse material, and child exploitation exists on its platform, and some research suggests the problem may be getting worse.

A November report by the UK-based National Society for the Prevention of Cruelty to Children found Instagram was the most widely used platform in child grooming cases in the early months of the pandemic, appearing in 37% of cases, up from 29% over the previous three years. The US-based National Center for Missing and Exploited Children said that in 2020, Facebook and its family of apps reported 20.3 million instances of possible child abuse on their platforms.

Facebook said in January that its AI systems “proactively” catch 99% of child exploitation content before it’s reported by users or researchers – however, that number doesn’t account for content that goes unreported.

In 2019, a privacy flaw in Facebook’s Messenger Kids app allowed thousands of children to enter chats with strangers. That same year, it emerged that Facebook had secretly built an app that paid teens for extensive access to their phone and internet usage data, before Apple forced Facebook to shutter the app for violating its App Store policies.

Also in 2019, the Federal Trade Commission hit Facebook with a $5 billion fine over privacy violations – though privacy advocates have argued that it did little to prevent Facebook from scooping up user data.

Other tech platforms have also stumbled when it comes to protecting children’s privacy online. Google reached a $170 million settlement with the FTC over allegations that YouTube illegally collected kids’ data without their parents’ consent. In September, a British researcher filed a $3 billion lawsuit against YouTube, alleging it illegally targeted “addictive” content at children under the age of 13 and harvested their data for targeted ads.

Read the original article on Business Insider