- Facebook and Google are failing to clamp down on reports of fraudulent ads, new research shows.
- Consumer group Which? called for new laws to force the tech giants to monitor fraud more closely.
- Both firms said scam ads were not allowed on their platforms and would take action where necessary.
Facebook and Google are failing to crack down on fraudulent ads placed on their platforms, even after users have reported them, new research shows.
Online fraud has skyrocketed over the past year, as scammers capitalized on the widespread lockdowns that have kept people indoors all over the world.
Tech giants have come under pressure to take action against nefarious actors. Past reports have accused Facebook of taking a "lax approach" to the issue, while criminals continue to set up fake Google ads within hours.
More than a third (34%) of people who reported a scam ad to Google said it was not taken down, while just over a quarter (26%) said the same of Facebook, according to a study published by British consumer group Which?.
Which? gave examples of scammers posting fake ads offering discounts at established shoe retailers such as Clarks and Russell & Bromley, using their logos and branding. The ads led to lookalike websites designed to steal consumers' financial details. One victim said she paid £85 for a pair of boots but instead received a pair of cheap sunglasses.
The study, of 2,000 adults in the UK, found that while Google was worse at reacting to reported scams, victims were more likely to encounter a fraudulent ad on Facebook in the first place.
Around 27% said they had encountered a scam ad on Facebook, compared with 19% on Google.
Adam French, a consumer rights expert at Which?, said the findings showed both Facebook and Google had left their users “worryingly exposed to scams,” and suggested the UK government bring in legislation to root out the problem.
“Online platforms must be given a legal responsibility to identify, remove and prevent fake and fraudulent content on their sites,” he said. “The case for including scams in the Online Safety Bill is overwhelming and the government needs to act now.”
As part of the British government’s proposed Online Safety Bill, tech companies that allow users to post their own material or talk to others online could be fined up to £18 million (around $25 million) or 10% of their annual revenue, whichever is higher, for failing to remove “harmful” content.
The Bill is expected to contain extra provisions for the biggest social media companies with "high-risk features," likely to include Facebook, TikTok, Instagram and Twitter.
A Facebook spokesperson told Insider fraudulent activity was “not allowed” on its platform, adding that the company had taken action against a number of the scam pages reported.
“Our 35,000 strong team of safety and security experts work alongside sophisticated AI to proactively identify and remove this content, and we urge people to report any suspicious activity to us,” they said.
A Google spokesperson said the company had previously removed more than 3.1 billion scam ads for violating its policies. “We take action on potentially bad ads reported to us and these complaints are always manually reviewed,” they said.
Insider previously uncovered scammers promising investors "huge returns" in a phony cryptocurrency scheme that used fake quotes from Elon Musk and Daniel Craig in ads on YouTube.
Are you a current or former Googler with more to share? You can contact this reporter securely using the encrypted messaging app Signal (+447801985586) or email (email@example.com). Reach out using a non-work device.