Parler is back on the App Store with a ‘PG’ version that cracks down on hate speech only on Apple devices

An illustration showing the Parler logo displayed on a smartphone, with the app’s website in the background.

  • Parler returned to the App Store on Monday after it had been kicked off in January.
  • On Apple devices, any posts identified as hate speech will not be visible.
  • The company’s chief policy officer said it will be like a “PG” version of Parler.
  • See more stories on Insider’s business page.

Parler returned to Apple’s App Store on Monday after it had been kicked off following the January 6 Capitol riot.

Apple announced last month that it had approved several changes to the app related to hate speech. Upon its return, Parler will look different – at least on Apple devices. While the Parler website allows any legal content to be viewed, the App Store version includes “enhanced threat-and-incitement reporting tools,” according to the listing on the App Store.

That means posts identified as hate speech will not be visible on Apple devices, while the same posts, labeled as “hate,” will still be visible on Parler’s website.

Parler’s interim CEO Mark Meckler told Insider in a statement that the site worked to meet Apple’s standards, while maintaining its focus on free speech.

“The entire Parler team has worked hard to address Apple’s concerns without compromising our core mission,” Meckler said. “Anything allowed on the Parler network but not in the iOS app will remain accessible through our web-based and Android versions. This is a win-win for Parler, its users, and free speech.”

Parler’s chief policy officer, Amy Peikoff, told The Washington Post that the company is pressing Apple to allow the content to remain on the app, but with a warning label. Apple had listed banning the content as one of its conditions for allowing the application back on its store.

Peikoff told The Washington Post the milder version of Parler that is on Apple devices could be called “Parler Lite or Parler PG.”

“Where Parler is different [from Apple], is where content is legal, we prefer to put the tools in the hands of users to decide what ends up in their feeds,” she said.

In the past, the social-media app has avoided censoring its content, identifying itself as a “free speech” alternative to Twitter. The app tried to return to Apple devices in February but was blocked by the company. Apple cited several examples of hate speech, including Nazi symbols, in its decision not to allow the app to return.

Parler was removed from the App Store in January – at the time it was the most downloaded app on the store – after numerous Capitol rioters used the site to organize the insurrection at the Capitol. Following the Capitol insurrection, other web providers including Google Cloud and Amazon Web Services banned Parler.

Parler’s website was restored when SkySilk began hosting it in February, but it has yet to return to the Google Play store. Apple and Google spokespeople were not immediately available to comment.

Read the original article on Business Insider

A Muslim advocacy group just sued Facebook for failing to remove hate speech, the latest example of tech’s patchwork policies that fail to crack down on Islamophobia

Muslim Advocates sued Mark Zuckerberg and Sheryl Sandberg for allegedly misleading Congress about how effectively Facebook removes hate speech.

  • An advocacy group sued Facebook for allegedly misleading Congress regarding hate speech moderation.
  • The suit claims Facebook failed to remove most anti-Muslim groups presented to them in 2017.
  • The Muslim Advocates suit underscores how tech platforms fail to moderate anti-Muslim content.
  • See more stories on Insider’s business page.

A Muslim advocacy group this week sued Facebook for failing to curtail hate speech, part of tech’s broader problem stopping Islamophobic speech.

Civil rights group Muslim Advocates filed a suit against Facebook and four company executives in the District of Columbia Superior Court, accusing them of lying to Congress about how the company moderates hate speech.

Facebook executives have told Congress of their commitment to removing content that violates their policies, including COO Sheryl Sandberg’s assertion to the Senate Intelligence Committee on Facebook and Foreign Influence that “when content violates our policies, we will take it down.”

Yet Muslim Advocates said it presented Facebook with a list of 26 groups that spread anti-Muslim hate in 2017, and 19 of them are still active.


The suit claims Facebook allowed a man threatening to kill Congresswoman Ilhan Omar to post “violent and racist content for years,” and that the company failed to remove a group called “Death to Murdering Islamic Muslim Cult Members” even after Elon University Professor Megan Squire brought it to Facebook’s attention.

“We do not allow hate speech on Facebook and regularly work with experts, non-profits, and stakeholders to help make sure Facebook is a safe place for everyone, recognizing anti-Muslim rhetoric can take different forms,” a Facebook spokesperson said in a statement to Insider. “We have invested in AI technologies to take down hate speech, and we proactively detect 97 percent of what we remove.”

In 2018, Facebook CEO Mark Zuckerberg testified to Congress that the platform can fail to police hate speech because of the limits of its artificial intelligence. Hate speech has nuance that can be tricky for AI to identify and remove, especially across different languages, Zuckerberg said.

Zuckerberg once again addressed questions about moderation and automation at a March 2021 congressional hearing about misinformation. His testimony about how content moderation needs to take into consideration “nuances,” like when advocates make counterarguments against hateful hashtags, seemed at odds with Facebook’s admitted reliance on AI to do the job of moderating hate speech.

Peter Romer-Friedman, a principal at Gupta Wessler PLLC who helped file the suit and the former counsel to Sen. Edward M. Kennedy, said Congress cannot adequately oversee corporations that misrepresent facts to lawmakers.

Romer-Friedman said Facebook’s failure to remove a group that claimed “Islam is a disease” – which directly violates the company’s hate speech policy, which prohibits “dehumanizing speech including…reference or comparison to filth, bacteria, disease, or feces” – is an example of the firm not following through on its promise to Congress to quell hate speech.

“It’s become all too common for corporate execs to come to Washington and not tell the truth, and that harms the ability of Congress to understand the problem and fairly regulate businesses that are inherently unsafe,” Romer-Friedman told Insider.

How Facebook and other tech firms are failing to address anti-Muslim hate speech

The suit highlights tech firms’ ongoing problem responding to anti-Muslim content online.

Rep. Omar, the first congressperson to wear a hijab or Muslim headscarf, called on Twitter to address the death threats she receives. “Yo @Twitter this is unacceptable!” she said in 2019.

The Social Science Research Council analyzed more than 100,000 tweets directed at Muslim candidates running for office in 2018, and found Twitter was responsible “for the spread of images and words from a small number of influential voices to a national and international audience,” per The Washington Post.

The spread of anti-Muslim content extends far beyond Facebook and Twitter:

  • TikTok apologized to a 17-year-old user for suspending her account condemning China’s mass crackdown on Uighur Muslims.
  • VICE has reported Muslim prayer apps like Muslim Pro had been selling location data on users to the US military.
  • Instagram banned Laura Loomer, a “proud Islamophobe,” years after Uber and Lyft banned her for a series of anti-Muslim tweets following a terror attack in New York.

Sanaa Ansari, a staff attorney with Muslim Advocates, said there’s been “clear evidence” of incitement to violence against Muslims potentially fueled by unchecked hate speech on social media. In 2019, a 16-minute livestream of a gunman attacking two mosques and killing 51 people in New Zealand was uploaded to Facebook and spread quickly to YouTube, Instagram, and Twitter.

“There have been multiple calls to arms to Muslims, there have been organized events by anti-Muslim supremacists and militias who have organized marches, protests at mosques in this country,” Ansari told Insider in an interview. “And that’s just the tip of the iceberg.”


PayPal suspends account of neo-Nazi who was using the site to sell hate speech

PayPal suspended a neo-Nazi user after anti-fascist activists noted he was selling books using the company’s service.

An avowed white supremacist will have to find another way to sell his racist tracts after PayPal suspended his account on Tuesday.

Billy Roper is a third-generation white supremacist, according to the Southern Poverty Law Center; his father and grandfather were both members of the Ku Klux Klan. The SPLC describes Roper, based in Arkansas, as “the uncensored voice of violent neo-Nazism.”

“I’m a biological racist,” he said in a 2003 essay published in a neo-Nazi newsletter, per the SPLC. “Every non-white on the planet has to become extinct,” he added in a 2005 radio interview. He also praised the September 11, 2001, terrorist attacks, admiring the “testicular fortitude” of al-Qaeda.

Though his views were well-documented, Roper was until this week selling his latest collection of hate speech – purporting to be a guide to surviving “the future breakup of America” into racial enclaves – on his own website, where he was accepting credit cards through PayPal after Amazon and other online retailers removed the 126-page screed from their platforms.

“We regularly assess activity against our Acceptable Use Policy and carefully review actions reported to us, and will discontinue our relationship with account holders who are found to violate our policy,” a company spokesperson said after Insider asked about his use of the service. PayPal’s policy states that users may not promote “hate, violence, racial or other forms of intolerance.”

On Twitter, the Colorado Springs Anti-Fascists group has been calling on supporters to contact PayPal and other companies to call attention to their roles in facilitating the spread of racist propaganda.

“Billy Roper is a well known neo-Nazi leader, so he has been on our radar for years,” an activist from the group told Insider. “Cutting off funding to white supremacist organizations and figureheads makes their recruitment and propaganda efforts more difficult.”

It’s not the first time that PayPal has acted against right-wing extremists after activists pointed out they were exploiting its platform. In 2019, it suspended an account being used to fundraise for the KKK after a co-founder of the anti-racist group Sleeping Giants highlighted it on social media.



Facebook banned Holocaust denial from its platform in October. Anti-hate groups now want the social media giant to block posts denying the Armenian genocide.

Facebook CEO Mark Zuckerberg leaving The Merrion Hotel in Dublin after a meeting with politicians to discuss regulation of social media and harmful content in April 2019.

  • In October, Facebook announced changes to its hate speech policy and instituted a ban on posts denying the Holocaust.
  • However, the ban did not include the denial of other genocides, such as the Rwandan or Armenian genocides.
  • Now, advocates are calling for Facebook to ban posts denying the Armenian genocide, too.
  • From 1915 to 1923, the Ottoman Empire killed 1.5 million Armenians and expelled another half a million. Turkey still falsely claims that the genocide never happened.
  • Visit Business Insider’s homepage for more stories.

Anti-hate advocates are calling on Facebook to ban posts denying the Armenian genocide, which led to the deaths of over 1.5 million ethnic Armenians, saying the social media giant’s policy on hate speech fails to address recent crimes against humanity.

The call to action follows Facebook’s October announcement that it would ban posts denying the Holocaust, which came after pressure from human rights groups, Holocaust survivors, and a 500-plus company ad boycott. However, the change did not include the denial of other genocides, such as the Rwandan and Armenian genocides, Bloomberg reported.

“They have an obligation to responsibly address all genocide,” said Arda Haratunian, board member for the Armenian General Benevolent Union (AGBU), the largest non-profit dedicated to the international Armenian community.  “How could you not apply the same rules across crimes against humanity?”

Now, voices from across the Armenian diaspora and anti-hate groups are calling for the company to change its policy. In November, the Armenian Bar Association penned a letter to Facebook and Twitter (which banned posts denying the Holocaust in the days after Facebook did), proposing that they expand their ban to posts denying the Armenian genocide, too. 

“It made us hopeful, because it was a sign that Facebook is taking steps towards fixing its speech problem,” said Lana Akopyan, a lawyer specializing in intellectual property and technology, and member of the Armenian Bar Association’s social media task force. The Armenian Bar Association has yet to receive a response from either company, Akopyan told Business Insider.

The calls to expand hate speech policies come as social media platforms face a wider reckoning on how they regulate speech. Politicians on both sides of the aisle have criticized Section 230 of the Communications Decency Act, a legal provision that shields internet companies from lawsuits over content posted on their sites by users and gives companies the ability to regulate that content.

In recent years, Facebook has struggled with human rights issues on the platform. In 2018, a New York Times investigation found that Myanmar’s military officials systematically spread propaganda on Facebook to incite the ethnic cleansing of the country’s Muslim Rohingya minority population.  Since 2017, Myanmar’s military has been accused of carrying out a systemic campaign of killing, rape, and arson against Rohingyas, leading over 740,000 to flee for Bangladesh, according to the United Nations Human Rights Council. 

Facebook’s current hate speech policy prohibits posts that directly attack a protected group, including someone of a racial minority, certain sexual orientation or gender, or religion. But the platform lacks a cohesive response to other “harmful false beliefs,” like certain conspiracy theories, said Laura Edelson, a PhD candidate at NYU who researches online political communication. Rather than a systematic approach to harmful misinformation, Edelson likened Facebook’s strategy to a game of “whack-a-mole.” 

“You are allowed to say, currently, the Armenian genocide is a hoax and never happened,” said Edelson. “But you are not allowed to say you should die because you are an Armenian.”

From 1915 to 1923, the Ottoman Empire killed 1.5 million Armenians and expelled another half a million. However, Turkey still falsely claims that the genocide never happened.

“Holocaust denial is typically done by fringe groups, irrational entities. The denial of the Armenian genocide is being generated by governments… which makes it a far greater threat,” said Dr. Rouben Adalian, Director of the Armenian National Institute in Washington, D.C. 

It also makes enforcement a thorny issue for Facebook, since it may involve moderating the speech of political leaders.

“Facebook doesn’t want to wrangle with this issue, not because it’s technically difficult, because it isn’t, but because it is difficult at a policy level,” said Edelson. “There’s a government agent here, that you are going to have to make unhappy. In the case of the Armenian genocide, it’s the Turkish government.”

Facebook did not respond to Business Insider’s requests for comment. Twitter said hateful conduct has no place on its platform and its “Hateful Conduct Policy prohibits a wide range of behavior, including making references to violent events or types of violence where protected categories were the primary victims, or attempts to deny or diminish such events.” The company also has “a robust glorification of violence policy in place and take action against content that glorifies or praises historical acts of violence and genocide,” a spokesperson said.

Yet online the falsehoods proliferate, advocates told Business Insider. On Facebook, the page “Armenian Genocide Lie” has thousands of followers, and screenshots of tweets shared with Business Insider show strings of identical posts that appear to be posted by bots, calling the Armenian genocide “fake.” 

And stateside, Armenians point to a string of hate crimes, including the arson of an Armenian church in September and the vandalism of an Armenian school in July, as evidence that anti-Armenian sentiment is a growing issue.

The calls for change come amid international conflict between Armenia and Azerbaijan over the region of Nagorno-Karabakh in the South Caucasus, which is internationally recognized as part of Azerbaijan and is populated by many ethnic Armenians. War broke out in September. In November, Armenia surrendered and Russia brokered a peace deal. Tensions continue to flare in the area and videos of alleged war crimes have surfaced online.

“Facebook has a responsibility, first and foremost, to its users, to protect them against harmful misinformation. The idea that the Armenian genocide did not happen pretty clearly falls into that category,” said Edelson. 

The Anti-Defamation League (ADL), which successfully lobbied for social media companies to ban Holocaust denial, is also supporting the calls for change. 

“ADL believes that tech companies must take a firm stance against content regarding genocide and the denial or diminishment of other atrocities motivated by hate,” said an ADL spokesperson in a statement to Business Insider.  “Tech companies should, without doubt, consider denial of the Armenian genocide to be violative hate speech.”

Dr. Gregory Stanton, founding president of human rights nonprofit Genocide Watch, says that denial is a pernicious stage of genocide, since it seeks to erase the past and can predict future violence. 

“Denial occurs in every single genocide,” said Stanton. “I think it’s irresponsible…. with Facebook’s incredible reach, it absolutely should be taken down.” 

As for Akopyan, her fight to change Facebook’s policy is personal. Her family survived the Baku Pogroms in Azerbaijan, a campaign in 1990 in which Azeris killed ethnic Armenians and drove them from the city. Akopyan’s family left all their belongings behind and fled in the night, she said. The International Rescue Committee sponsored her family, and she relocated to Brooklyn, New York, at 10 years old.

“I grew up in that tension as a child, where Azerbaijani mobs tried to kill me and my family, and I escaped,” she said in an interview. “How many times [do] our people have to lose everything and be driven away from their homes to start over?” 

“And it continues to happen,” she added.  “I can’t help but think it’s because there’s constant denial of it ever happening to begin with.” 

Wish.com removes most Confederate merchandise, citing policy against hate, but it still sells items glorifying dictators

The online retailer formally prohibits the sale of “hate symbols,” but many still slip through the cracks.

  • Before Christmas, visitors to the online retailer could purchase a number of items celebrating the Confederacy, from flags to shirts to hats.
  • After Insider pointed out the items last week, the company removed most but not all related merchandise.
  • “Wish prohibits the listing of products that glorify or endorse hatred towards others,” a company spokesperson said Monday.
  • But visitors can still purchase items that violate the company’s policy, including merchandise glorifying dictators Bashar al-Assad and Saddam Hussein.
  • Visit Business Insider’s homepage for more stories.

This holiday season, visitors to Wish.com – the online retailer whose name is featured on jerseys worn by LeBron James and other members of the Los Angeles Lakers – were able to buy an item that is officially prohibited for promoting hate: the battle flag of the defeated Confederate States of America.

Paid ads on the site actually featured Mississippi’s recently scrapped state banner, which included the Confederate flag in the top left corner. In the ad, the symbol of the Confederacy, and the lost cause of chattel slavery, is featured prominently; hundreds of people purchased these items, according to the site.

Following an inquiry from Insider, most but not all of that merchandise has now been purged.

Confederate flags were available for purchase on Wish.com throughout the holiday season.

“Wish prohibits the listing of products that glorify or endorse hatred towards others,” a company spokesperson said Monday, noting it “deploys a number of measures to prevent these types of listings and removes them if prevention was unsuccessful.”

Led by billionaire and former Google engineer Peter Szulczewski, the e-commerce site, akin to eBay and Amazon, brought in just under $2 billion in revenue in 2019. It also raised $1.1 billion when it went public on the stock exchange in November 2020.

Like its competitors, Wish has an explicit policy on “hateful symbols”: it does not allow them. Nazi memorabilia, the alt-right “Kekistan” flag, and “dictator glorification” are all expressly prohibited.

The company’s policy threatens to impose a $10 fine on those who sell prohibited items. The company spokesperson did not immediately respond when asked if the penalty had been imposed on those who listed the Confederate merchandise.

Although Wish has a clear policy against symbols of hate, enforcement is uneven.

In 2019, for example, Wish and Amazon were both forced to apologize after The Auschwitz Museum revealed that the sites were selling Christmas tree ornaments with photos of the concentration camp, as Wired reported.

Confederate merchandise also remains just one quick search away, including a “Confederate States Cavalry” flag and matching baseball cap, despite the renewed effort to clean up the site.

Items celebrating the Confederacy continue to be listed on Wish.com, despite a policy prohibiting their sale.

As of Monday night, visitors could also purchase t-shirts, hoodies, and face masks celebrating Bashar al-Assad, the Syrian strongman whom the US concluded used chemical weapons along with indiscriminate bombing campaigns that have killed hundreds of thousands and forced millions of other people to flee their homeland.

Numerous items featuring Syrian dictator Bashar al-Assad are available to purchase on Wish.com.

Users can also buy t-shirts and cell phone cases featuring Saddam Hussein, who over the summer of 2020 was featured in a seemingly algorithm-driven social media campaign from the company that highlighted a $20 framed photo of the deceased Iraqi dictator.

A spokesperson for the Los Angeles Lakers, which announced a corporate partnership with Wish in 2017, did not respond to emails requesting comment.



Facebook reportedly hasn’t banned a violent religious extremist group in India because it fears for its business prospects and staff’s safety

Indian Prime Minister Narendra Modi (L) and Facebook CEO Mark Zuckerberg at Facebook’s headquarters in Menlo Park, California September 27, 2015.

  • Facebook’s safety team determined earlier this year that Bajrang Dal, a religious extremist group in India, was likely a “dangerous organization” that should be banned from the platform under its rules, The Wall Street Journal reported Sunday.
  • But, The Journal reported, Facebook became concerned about banning the group after its security team warned that doing so could lead to attacks against Facebook’s staff.
  • Facebook’s inconsistency in enforcing its rules in India has also been motivated by fears that backlash from India’s nationalist ruling party could hurt business, The Wall Street Journal previously reported.
  • The social media company has increasingly come under fire over its struggle to effectively and consistently police its platform — especially outside of the US, where users have leveraged its platform to facilitate ethnic violence, undermine democratic processes, and crack down on free speech.
  • Visit Business Insider’s homepage for more stories.

Facebook determined that a religious extremist group in India likely should be banned from the platform for promoting violence, but it has yet to take action because of concerns over its staff’s safety and political repercussions that could hurt its business, The Wall Street Journal reported Sunday.

Bajrang Dal, a militant Hindu nationalist group, has physically assaulted Muslims and Christians, and one of its leaders recently threatened violence against Hindus who attend church on Christmas.

Earlier this year, Facebook’s safety team determined that Bajrang Dal likely was a “dangerous organization” and, per its policies against such groups, should be removed from the platform entirely, according to The Journal.

But Facebook hesitated to enforce those rules after its security team concluded that doing so could hurt its business in India as well as potentially trigger physical attacks against its employees or facilities, The Journal reported.

“We ban individuals or entities after following a careful, rigorous, and multi-disciplinary process. We enforce our Dangerous Organizations and Individuals policy globally without regard to political position or party affiliation,” a Facebook company spokesperson told Business Insider.

According to The Journal, Facebook declined to say whether it ultimately designated Bajrang Dal a dangerous organization.

This isn’t the first time Facebook has faced criticism over how it has – or hasn’t – enforced its rules in India, even in just the past few months.

The Journal reported in August that Facebook refused to apply its hate speech policies to T. Raja Singh, a politician from India’s nationalist ruling BJP party, despite his calls to shoot Muslim immigrants and threats to destroy mosques.

Facebook employees had concluded that, in addition to violating the company’s policies, Singh’s rhetoric in the real world was dangerous enough to merit kicking him off the platform entirely. However, Facebook’s top public policy executive in India overruled them, arguing that the political repercussions could hurt the company’s business (India is its largest and fastest-growing market globally by number of users).

The internal tension over Bajrang Dal reflects the frequent challenges Facebook faces when its profits come into conflict with local governments and laws, rules the company has established for its platform, and CEO Mark Zuckerberg’s pledges to uphold free speech and democratic processes.

In August, Facebook took the rare step of legal action against Thailand’s government over its demand that the company block users within the country from accessing a group critical of its king, though it’s complying with the government’s request while the case proceeds in court.

But BuzzFeed News reported in August that Facebook ignored or failed to quickly address dozens of incidents of political misinformation and efforts to undermine democracy around the world, particularly in smaller and non-Western countries.

And even as Zuckerberg has defended Facebook’s exemption of President Donald Trump and other politicians from its hate speech and fact-checking policies, human rights activists around the world have slammed the social media giant for refusing to protect the free speech of those not in power.
