TikTok’s new privacy policy lets it collect your biometric data, including ‘faceprints and voiceprints’

An iPhone user looks at the TikTok app on the Apple App Store in January 2021.

  • TikTok updated its privacy policy Wednesday, permitting the collection of US users’ biometric information.
  • The policy only vaguely promises to ask users for their consent, TechCrunch reported.
  • In February, TikTok paid $92 million to settle claims it violated Illinois’ biometric data privacy law.
  • See more stories on Insider’s business page.

TikTok rolled out major updates to its privacy policy on Wednesday, including adding a new section that allows the ByteDance-owned company to collect US users’ biometric information.

“We may collect biometric identifiers and biometric information as defined under US laws, such as faceprints and voiceprints, from your User Content. Where required by law, we will seek any required permissions from you prior to any such collection,” the new policy reads.

As noted by TechCrunch, which earlier reported on the changes, that language could allow TikTok to collect most US users’ biometric data without explicitly asking them, because only a few states have laws restricting companies from collecting such data.

TikTok didn’t respond to Insider’s questions about whether it had already begun collecting users’ biometric data. However, the new language is found within a section titled “information we collect automatically,” meaning TikTok could potentially be collecting it already.

TechCrunch also noted that the policy doesn’t define “faceprints” or “voiceprints,” or explain why TikTok needs this data in the first place.

In February, TikTok paid $92 million to settle a class-action lawsuit in Illinois over allegations that it violated the state’s biometric data privacy law.

Last year, the Trump administration unsuccessfully attempted to ban TikTok from the US entirely, claiming its ownership by Beijing-based ByteDance posed a national security threat.

While President Joe Biden on Thursday issued an executive order banning Americans from investing in Chinese firms linked to surveillance of religious and ethnic minorities, his administration hasn’t taken an explicit position on TikTok.

Most cybersecurity experts say that TikTok poses no more security risk to average Americans than any other social media app, though some US government agencies and politicians have banned employees from using the app.

Read the original article on Business Insider

‘Apple is eating our lunch’: Google employees admit in lawsuit that the company made it nearly impossible for users to keep their location private

Google New York Office
Google in Manhattan.

Newly unredacted documents in a lawsuit against Google reveal that the company’s own executives and engineers knew just how difficult the company had made it for smartphone users to keep their location data private.

Google continued collecting location data even when users turned off various location-sharing settings, made popular privacy settings harder to find, and even pressured LG and other phone makers into hiding settings precisely because users liked them, according to the documents.

Jack Menzel, a former vice president overseeing Google Maps, admitted during a deposition that the only way Google wouldn’t be able to figure out a user’s home and work locations is if that person intentionally threw Google off the trail by setting their home and work addresses as some other random locations.

Jen Chai, a Google senior product manager in charge of location services, didn’t know how the company’s complex web of privacy settings interacted with each other, according to the documents.

Google and LG did not respond to requests for comment on this story.

The documents are part of a lawsuit the Arizona attorney general’s office brought against Google last year, accusing the company of illegally collecting location data from smartphone users even after they opted out.

A judge ordered new sections of the documents unredacted last week in response to a request by the trade groups Digital Content Next and News Media Alliance, which argued that disclosure was in the public interest and that Google was using its legal resources to suppress scrutiny of its data collection practices.

The unsealed versions of the documents paint an even more detailed picture of how Google obscured its data collection techniques, confusing not just its users but also its own employees.

Google collects user location data through a variety of avenues, according to the documents, including WiFi and even third-party apps not affiliated with Google – forcing users to share their data in order to use those apps or, in some cases, even to connect their phones to WiFi.

“So there is no way to give a third party app your location and not Google?” one employee said, according to the documents, adding: “This doesn’t sound like something we would want on the front page of the [New York Times].”

When Google tested versions of its Android operating system that made privacy settings easier to find, users took advantage of them, which Google viewed as a “problem,” according to the documents. To solve that problem, Google then sought to bury those settings deeper within the settings menu.

Google also tried to convince smartphone makers to hide location settings “through active misrepresentations and/or concealment, suppression, or omission of facts” – that is, data Google had showing that users were using those settings – “in order to assuage [manufacturers’] privacy concerns.”

Google employees appeared to recognize that users were frustrated by the company’s aggressive data collection practices, potentially hurting its business.

“Fail #2: *I* should be able to get *my* location on *my* phone without sharing that information with Google,” one employee said.

“This may be how Apple is eating our lunch,” they added, saying Apple was “much more likely” to let users take advantage of location-based apps and services on their phones without sharing the data with Apple.

Read the original article on Business Insider

Signal said Facebook shut down its advertising account after the privacy-focused messaging app tried to buy Instagram ads showing how the social media giant collects data

signal ceo moxie marlinspike
Signal CEO Moxie Marlinspike.

  • Signal said it tried to buy Instagram ads that would show users how Facebook targets them.
  • The ads would display personal information about users that Facebook uses when targeting ads.
  • But Signal said Facebook responded by shutting its account down.
  • See more stories on Insider’s business page.

Facebook blocked ads that Signal wanted to buy that would show Instagram users the data that Facebook collects from them, according to the encrypted messaging company.

In a blog post entitled “The Instagram ads Facebook won’t show you,” Signal said the likes of Facebook are driven to collect people’s data to sell, and the company wanted to showcase how that technology works. So it tried to buy “multi-variant targeted” ads on Instagram “designed to show you the personal data that Facebook collects about you and sells access to.” Facebook responded by shutting down Signal’s account, the blog post said.

“Being transparent about how ads use people’s data is apparently enough to get banned; in Facebook’s world, the only acceptable usage is to hide what you’re doing from your audience,” the company wrote in its post.

Signal posted examples of what the ads would look like on its blog. One reads: “You got this ad because you’re a newlywed pilates instructor and you’re cartoon crazy. This ad used your location to see you’re in La Jolla. You’re into parenting blogs and thinking about LGBTQ adoption.”

CEO Moxie Marlinspike tweeted another example that shows how a user could be targeted with ads based on their job, location, dietary preferences, and fitness interests.

Signal and Facebook did not immediately respond to Insider’s requests for comment.

Facebook has taken down ads critical of the company before. In 2019, Democratic Sen. Elizabeth Warren, who was running for office at the time, ran ads that laid out her plan to split up Facebook as well as other big tech companies. Facebook said it blocked the ads because they violated its rules around using the company’s corporate logo but eventually reinstated them.

Facebook’s ad business relies on data tracking to inform the algorithm that decides which ads to put in front of online users, and it’s lucrative: it bolstered the social media giant’s Q1 revenue to $26.17 billion, up 48% from the same period last year. The company attributed the rise to an increase in the average price per ad as well as the number of ad impressions.

Facebook has been vocal about its ad business being at risk thanks to a new privacy update that Apple has rolled out. The latest iOS update includes the company’s “App Tracking Transparency” feature that forces app developers to ask for permission to collect and track users’ data. Facebook has argued that the new feature will hurt small businesses that rely on personalized ads.

Read more: The battle between Facebook and Apple over privacy is about more than just ads – it’s about the future of how we interact with tech

Facebook’s WhatsApp also announced a controversial change to its terms of service earlier this year that would have forced users to share personal data with its parent company. WhatsApp said the move was to let businesses store chats using Facebook’s infrastructure.

Critics, including Tesla CEO Elon Musk, suggested that users switch to using Signal or Telegram.

Read the original article on Business Insider

Google is facing a lawsuit after a privacy flaw in its contact tracing tech exposed Android users’ data to third-party apps

Sundar Pichai
Alphabet CEO Sundar Pichai.

  • Google is facing a lawsuit after a bug in its contact tracing tech reportedly exposed user data.
  • Google and Apple developed a system that helps health officials alert people exposed to COVID-19.
  • An analysis found Google knew user data was exposed and failed to inform the public.
  • See more stories on Insider’s business page.

Google is facing a lawsuit after a privacy vulnerability in its contact tracing system left users’ data exposed.

Google was alerted in February that users’ sensitive data was exposed to third-party apps already installed on their mobile devices, and it failed to inform the public, according to the analysis company AppCensus and a report from The Markup. Google told The Verge that it was working on a fix for the privacy flaw.

The lawsuit, filed in a US district court in California by two affected users, demands that Google fix the security problem and be held accountable for “damages and restitution.”

The Google Apple Exposure Notification System, or GAEN, is a digital contact tracing system developed by Apple and Google that uses Bluetooth signals to alert users if they have come into contact with someone who has tested positive for COVID-19.

“Because Google’s implementation of GAEN allows this sensitive contact tracing data to be placed on a device’s system logs and provides dozens or even hundreds of third parties access to these system logs, Google has exposed GAEN participants’ private personal and medical information associated with contact tracing, including notifications to Android device users of their potential exposure to COVID-19,” the lawsuit reads.

Apple and Google did not immediately respond to Insider’s request for comment.

Read more: How Deutsche Bank is using contact tracing to bring its 90,000 employees safely back to work

In May 2020, the technology was made available to public health officials around the world to integrate with government health apps and verify and log a user’s contact status. Experts told Insider’s Aaron Holmes last April that the system would only be useful if Apple and Google could recruit enough people to use it.

The technology first launched in August as part of a statewide initiative in Virginia. According to the filing, more than 28 million people across 25 states, as well as Guam and Washington, DC, have used contact tracing apps equipped with Google and Apple’s technology or have activated notifications on their phones to alert them if they were exposed.

Contact tracing apps that use the technology run on both Google’s Android and Apple’s iOS operating systems. The analysis that AppCensus conducted did not find the flaw on iPhones running the software, according to The Verge.

Read the original article on Business Insider

Consumer Reports says Tesla’s cameras inside its cars, which transmit video footage of passengers, could pose a privacy risk

Elon Musk
  • Several Tesla models record and transmit video of drivers and passengers via in-car cameras.
  • Tesla cars have up to nine cameras covering both the exterior and interior of the car.
  • Consumer Reports said the in-car camera opens drivers up to serious privacy concerns.
  • See more stories on Insider’s business page.

Several Tesla vehicle models, including the Model 3 and Model Y, record and transmit video footage of drivers and passengers via in-car cameras. The cameras are designed to help Tesla develop its full self-driving software, but they present a serious privacy risk, according to Consumer Reports.

John Davisson, senior counsel at the Electronic Privacy Information Center (EPIC), told Consumer Reports the footage opens Tesla drivers up to a whole host of privacy concerns, including the potential for outside parties to gain access to the data for malicious purposes, as well as Tesla itself using the data for its own gain.

“It may later be repurposed for a system that is designed to track the behaviors of the driver, potentially for other business purposes,” Davisson told Consumer Reports.

Jake Fisher, senior director of Consumer Reports’ auto-test center, told Insider the most concerning aspect of its investigation into the cameras was that Tesla was not being entirely transparent about how the cameras were being used.

“Tesla could be using these cameras to stop crashes and they’re using it for studies, to help Tesla develop more things,” Fisher told Insider. “Tesla is the only automaker that has hardware that could help stop crashes, but isn’t using it for the driver’s safety.”

Tesla did not immediately respond to a request for comment on the report.

Tesla CEO Elon Musk said on Twitter that Tesla has used the in-car cameras to remove its full self-driving software from drivers that “did not pay sufficient attention to the road.”

Musk confirmed the company was using the in-car cameras to determine eligibility for the FSD software, when asked by another Twitter user.

Other car companies, including BMW, Ford, and General Motors, have elaborate driver monitoring systems, but they have focused those systems on driver safety rather than collecting data. Consumer Reports notes these companies do not record, save, or transmit the data, and they use infrared technology to identify a driver’s eye movements or head position instead of video cameras.

While Tesla does not use the in-car cameras to alert the driver to potential safety concerns, the company does use a real-time driver-engagement tool via steering wheel inputs that analyze the amount of pressure put on the wheel to keep drivers alert.

Consumer Reports said the steering wheel inputs can be easily tricked. “Just because a driver’s hands are on the wheel doesn’t mean their attention is on the road,” said Kelly Funkhouser, program manager for vehicle interface testing at Consumer Reports. Fisher told Insider in-car cameras could help save a lot of lives.

Tesla drivers can opt out of sharing the in-car videos via their control settings, and the Cabin Camera is disabled by default. According to Tesla’s site, the camera will only turn on before a crash or automatic emergency braking (AEB) activation.

China has also expressed concern about the cameras on Tesla cars. In March, China banned Tesla cars from military complexes over concerns that the company could monitor drivers via the cars’ cameras.

In response, Musk said on Twitter that the company would be shut down if it was spying on Chinese officials.

While Tesla’s Model 3 made the Consumer Reports’ “Top Picks” list last year, the publication removed its recommendation for the Model S, citing issues with its suspension and electronics. Consumer Reports also criticized Tesla’s Model Y in November for body hardware and paint issues.

Read the original article on Business Insider

What is Global Privacy Control? How organizations are teaming up to prevent your personal data from being sold

Hand typing on computer
The Global Privacy Control feature is a setting in some browsers and plug-ins designed to protect you against websites selling your personal data.

  • The Global Privacy Control (GPC) feature is a setting in some browsers and plug-ins to tell websites not to sell your personal data.
  • GPC is found in a small number of browsers and plug-ins, and compliance is optional.
  • The GPC is being developed by a consortium of tech companies and publishers.
  • Visit Insider’s Tech Reference library for more stories.

The Global Privacy Control (GPC) is a technology initiative being spearheaded by a group of publishers and technology companies to create a global setting in web browsers that allows users to control their privacy online. This means you should be able to set the GPC control in your browser to prevent websites from selling your personal data.

Why the Global Privacy Control feature is important

In recent years, there has been increasing scrutiny on privacy rights online. In 2018, the European Union’s General Data Protection Regulation (GDPR) went into effect, limiting the data websites can collect on EU citizens. The California Consumer Privacy Act (CCPA) is a similar legislative measure that went into effect in California in 2020.

While there is heightened interest in online privacy and some governments are taking steps to limit what websites can do with user data, there is no global way for users to opt out of having their personal information sold or used in ways they don’t approve of. Every website that needs to comply with legal mandates – or simply wants to implement more progressive privacy policies – must implement an opt-out mechanism on its own.

The GPC is built to inform websites not to sell user data. This is different from other privacy tools, which might limit tracking but still allow user data to be sold – or might even sell that data themselves.

Some organizations offer the ability for users to opt in to their privacy control feature.

When fully implemented, the GPC may allow you to opt out of having your personal data sold by the websites you visit.

Status of the Global Privacy Control feature

Buoyed by these new laws, the GPC is intended to be a single, global setting users can activate in their web browser that signals to all websites the user’s intention about their data privacy.

Currently, the specification is being written by an informal consortium of more than a dozen organizations including the Electronic Frontier Foundation (EFF), the National Science Foundation, The New York Times, Mozilla, The Washington Post, and Consumer Reports.

The specification that will govern how the GPC will be implemented and behave is still in development, though in principle, it simply allows a website to read a value (such as Sec-GPC-field-value = “1”) to know that the user has chosen to opt out of having their data sold.
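The draft mechanism described above can be sketched in a few lines. The following is a hypothetical server-side check, not a normative implementation: it assumes the draft spec’s “Sec-GPC” request header with a field value of “1”, and the header name and semantics may change as the specification evolves.

```typescript
// Sketch: how a website's server might honor the GPC signal.
// Assumes the draft spec's "Sec-GPC: 1" request header (an assumption
// based on the in-development specification, not a final standard).
function userOptsOutOfSale(headers: Record<string, string | undefined>): boolean {
  // Per the draft, a field value of "1" expresses a do-not-sell preference;
  // any other value, or the header's absence, signals no preference.
  return headers["sec-gpc"] === "1";
}

// Example request headers as a server framework might expose them
// (header keys lowercased).
const example = { "sec-gpc": "1", "user-agent": "ExampleBrowser/1.0" };
if (userOptsOutOfSale(example)) {
  console.log("GPC signal received: do not sell this user's data");
}
```

On the client side, the draft also describes a JavaScript property (`navigator.globalPrivacyControl`) through which page scripts in GPC-aware browsers can read the same preference.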

A number of web browsers and browser extensions have implemented the GPC in its draft form. However, adoption of the GPC privacy setting currently carries no legal weight: if you use a browser or extension with the GPC feature, no websites are obligated to respect its setting – compliance with the GPC is voluntary.

  • How to change your privacy settings on WhatsApp and protect yourself
  • How to adjust your privacy settings on Signal, and protect your messages with extra encryption and face scans
  • What are Apple’s Privacy Nutrition Labels? Here’s what you need to know about the new App Store feature that prioritizes user privacy
  • How to opt out of website cookies with WebChoices and avoid targeted ads

Read the original article on Business Insider

WhatsApp’s new T&Cs didn’t really change anything about sharing your data with Facebook, but you should still use Signal if you care about privacy

whatsapp users go to signal 4x3
  • WhatsApp caused a user stampede to rival encrypted messaging app Signal by sending users new terms and conditions.
  • Users were panicked by the notification WhatsApp sent out, thinking it meant the app would share more data with Facebook, its parent company.
  • In fact, WhatsApp was already sharing their data with Facebook — all the notification did was draw attention to it.
  • Visit Business Insider’s homepage for more stories.

On January 6, WhatsApp caused a user stampede.

The app sent users a notification asking them to sign off on updated terms and conditions, which stipulated it could share reams of metadata – including their phone numbers, locations, and contacts – with its parent company Facebook. If users did not consent, the notification said, they would lose access to WhatsApp.

The notification shocked users, at least some of whom use WhatsApp because the encrypted messaging app touts itself as privacy-focused. High-profile figures including Tesla’s CEO Elon Musk, the world’s richest man, recommended users switch to Signal, a much smaller rival encrypted messaging app.

People flocked to Signal in droves. Signal amassed 7.5 million downloads in the week following WhatsApp’s notification – up 4,200% from the previous week.

WhatsApp soon went into damage-control mode, putting up a new FAQ about the policy change and delaying the deadline for users to agree to the new terms and conditions from February 8 until May 15.

As it happens, it doesn’t look like anything has really changed about how WhatsApp shares data with Facebook. 

The updates to the T&Cs were solely to let business accounts on WhatsApp link up with Facebook’s back-end analytics infrastructure, WhatsApp said. They do not change anything about the way an average user’s data gets passed back to Facebook, it said.

WhatsApp gave users 30 days to opt out of sharing some data with Facebook back in 2016 – Wired reported that this opt-out would still be honored, and WhatsApp confirmed the report to Insider.

What WhatsApp accidentally did with its notification was to highlight to users exactly how much of their data it was already sending back to the Facebook mothership.

“I suspect people were alarmed by being reacquainted with what WhatsApp already share”

Alan Woodward, a cybersecurity expert at the University of Surrey, said WhatsApp made the new T&Cs look much scarier to users by telling them they’d lose access if they didn’t consent.

“WhatsApp presented this as an ultimatum to users, which never goes down well: accept these new terms or stop using the service. They could perhaps have been a lot clearer up front about what the changes were, in which case many would have simply said okay,” Woodward said.

“I suspect people were alarmed by being reacquainted with what WhatsApp already share,” he said. 

Professor Eerke Boiten of De Montfort University agreed that WhatsApp’s method of sending a notification with what appeared to be an ultimatum was a misstep.

“The main thing they got wrong was putting it into the users’ faces. They’ve alerted users to something that didn’t get massively worse […] in any significant sense, but was a looming problem all along,” Boiten told Insider.

WhatsApp’s shifting attitude to privacy has been a cause for concern among tech industry insiders and privacy advocates for a long time. The decision to increasingly link WhatsApp up with Facebook’s ad business is what drove its cofounder Brian Acton to leave the company – the same is reportedly true for cofounder Jan Koum.

Acton subsequently helped found the non-profit Signal Foundation, which backs Signal.

“The move from WhatsApp to Signal is maybe not justified by the immediate incidence, but in broader terms it’s a good thing,” Boiten added.

Read more: Signal’s CEO reveals how it became a red-hot alternative to WhatsApp without venture capital or a business plan

You can see the difference between how much data WhatsApp collects compared to Signal using the Apple App Store’s new privacy information feature. While WhatsApp cannot read the contents of messages because they are encrypted, it is able to hoover up metadata – i.e., data about an account and its messaging. That includes information like your phone number, as well as who you’re messaging and when.

WhatsApp vs Signal
WhatsApp collects much more data than Signal.

“Metadata is almost as telling as the contents [of a message],” Boiten said. It’s hard to get a clear read on exactly what metadata WhatsApp sends back to Facebook, Boiten said, as its privacy policy is written in broad language – for example, promising not to share “account information” without specifying whether that includes metadata.

Woodward also pointed to WhatsApp’s collection of metadata. “The perverse thing is that WhatsApp encryption is based upon the same as used by Signal, but whilst [WhatsApp] keep the content [of] your messages confidential they do harvest some metadata, and knowing who talked to whom, when and for how [long] can be valuable data in targeting advertising by identifying affinity group[s],” he said.

Signal’s focus on privacy does come with a tradeoff: if you make it impossible to gather things like metadata, tracking down illegal activity on a messaging app becomes difficult. Signal employees are reportedly worried the company’s explosive growth could mean it attracts extremists, The Verge reported.

Their worries are not without precedent. Far-right users moved to rival encrypted messaging app Telegram after social media app Parler – which is famous for its popularity amongst far-right commentators and had a growth explosion following the US Capitol riots – was booted off its Amazon web servers.

But CEO Moxie Marlinspike thinks the benefits of a truly private messenger outweigh the potential abuses.

“I want us as an organization to be really careful about doing things that make Signal less effective for those sort of bad actors if it would also make Signal less effective for the types of actors that we want to support and encourage […] Because I think that the latter have an outsized risk profile. There’s an asymmetry there, where it could end up affecting them more dramatically,” Marlinspike told The Verge.

While the new WhatsApp notification appears to be a PR blunder, Woodward doesn’t think WhatsApp is in deep trouble long-term.

“WhatsApp still has a critical mass of users and many are quite relaxed about the unwritten social contract that says you can use our service for free in return for us using your data to make a profit,” he said.

Read the original article on Business Insider

Uber fined $59 million by California regulators for repeatedly refusing to turn over data about sexual assaults

uber logo
An Uber logo is shown on a rideshare vehicle during a statewide day of action to demand that ride-hailing companies Uber and Lyft follow California law and grant drivers “basic employee rights”, in Los Angeles, California, U.S., August 20, 2020.

  • Uber must pay a $59.1 million fine in California for repeatedly refusing to turn over data related to its 2019 sexual assault report to the California Public Utilities Commission, an administrative judge ruled Monday.
  • In the ruling, the judge ordered Uber to pay the fine and turn over the data within 30 days or CPUC — which oversees rideshare companies — can revoke Uber’s license to operate in the state.
  • Uber said it refused CPUC’s requests to protect the privacy of survivors, but the judge rejected that argument by noting Uber still wouldn’t hand over the data when given the chance to do so anonymously.
  • The ruling resurfaced Uber’s years-long challenge addressing sexual assault involving its customers and drivers, as well as its history of hardball tactics with regulators.
  • Visit Business Insider’s homepage for more stories.

Uber has been ordered to pay a $59.1 million fine to the California Public Utilities Commission for repeatedly refusing to comply with its requests for data about the company’s 2019 sexual assault report.

On Monday, an administrative law judge ruled that Uber must pay the fine within 30 days and turn over the data or CPUC can revoke Uber’s license to operate within California.

Uber “refused, without any legitimate legal or factual grounds, to comply” with multiple previous administrative rulings ordering it to turn over the data, Monday’s ruling said.

Last December, following intense public pressure, Uber issued a report that said it had received 3,045 reports of sexual assault in the US in 2018 – an average of more than eight per day.

Days later, CPUC – the agency responsible for regulating ridesharing services like Uber – demanded more information from Uber, including the names and contact information for all authors of the safety report, witnesses to the alleged assaults (including victims), and the person at Uber each incident was reported to.

“The CPUC has been insistent in its demands that we release the full names and contact information of sexual assault survivors without their consent. We opposed this shocking violation of privacy, alongside many victims’ rights advocates,” an Uber spokesperson told Business Insider.

Uber had also argued that the data would end up in the hands of “untrained individuals” and that regulators hadn’t asked other rideshare companies for similar information.

In a January ruling, however, an administrative judge addressed Uber’s privacy concerns by allowing the company to submit the information to CPUC under seal to shield it from public view.

Still, Uber refused to comply, and according to Monday’s ruling, “inserted a series of specious legal roadblocks to frustrate the Commission’s ability to gather information that would allow the Commission to determine if Uber’s TNC operations are being conducted safely.”

An Uber spokesperson blamed the CPUC for delays and adjustments to its data request that resulted in the fine, telling Business Insider: “These punitive and confusing actions will do nothing to improve public safety and will only create a chilling effect as other companies consider releasing their own reports. Transparency should be encouraged, not punished.”

But Monday’s order said that Uber “failed to respect the authority” of the January ruling, instead choosing to “roll the dice” on legal challenges that largely raised the same issues judges had already rejected.

The ruling reflects Uber’s long history of playing hardball with state and local regulators, including by refusing to share data, deceiving authorities, relying on illegal lobbying tactics, and threatening to close up shop when lawmakers try to pass tougher regulations – including regulations aimed at improving rider safety.

Read the original article on Business Insider

Apple and Google have reportedly banned a major data broker from collecting location data from users’ phones amid scrutiny over its national security work

tim cook sundar pichai apple google
  • Apple and Google have banned X-Mode, a major data broker, from collecting location from users whose mobile devices run iOS and Android, The Wall Street Journal reported Wednesday.
  • The tech giants told developers they must remove X-Mode’s tracking software or risk being cut off from their app stores — and therefore the vast majority of mobile devices globally.
  • The move by Apple and Google follows recent reports by The Wall Street Journal and Vice News about X-Mode’s national security contracts and congressional scrutiny over how government agencies purchase Americans’ location data from private companies.
  • Visit Business Insider’s homepage for more stories.

Apple and Google have banned X-Mode Social, a major data broker, from collecting mobile location data from iOS and Android users following criticism of its national security work, The Wall Street Journal reported Wednesday.

The tech giants are requiring developers to remove X-Mode’s tracking software from their apps or they could get cut off from Apple’s App Store and Google’s Play Store, according to The Journal. Apple has given developers two weeks to comply, the newspaper reported.

In a statement to Business Insider, a Google spokesperson said: “We are sending a 7-day warning to all developers using the X-Mode SDK. Apps that need more time due to the complexity of their implementation can request an extension, which can be up to 30 days (including the initial 7-days). If X-Mode is still present in the app after the timeframe, the app will be removed from Play.”

Apple’s iOS and Google’s Android mobile operating systems power nearly all smartphones worldwide, effectively forcing developers to ditch X-Mode, and the policies mark one of the most direct actions against a specific data broker.

“X-Mode collects similar mobile app data as most location and advertising SDKs in the industry. Apple and Google would be setting the precedent that they can determine private enterprises’ ability to collect and use mobile app data,” an X-Mode spokesperson told Business Insider in a statement.

X-Mode is still trying to get information from Apple and Google on how its tracking software differs from what other location data companies – or even Apple and Google themselves – collect, the spokesperson added.

Apple did not immediately respond to a request for comment on this story.

The moves by Apple and Google follow recent reports about how X-Mode sells users’ location data to US defense contractors, and by extension US military, law enforcement, and intelligence agencies – contracts that have drawn scrutiny from lawmakers who argue it undermines Americans’ privacy rights by allowing the government to avoid having to obtain search warrants.

Both Apple and Google disclosed their new policies banning X-Mode to investigators working on behalf of Sen. Ron Wyden, according to The Wall Street Journal. Wyden has been investigating how private companies collect and sell Americans’ mobile location data to the government, often without their knowledge, and has proposed legislation that would ban the practice.

Vice News reported in November that X-Mode collects location data from users via as many as 400 apps, including Muslim prayer and dating apps, weather apps, and fitness trackers, and then sells that data to contractors that work with the US Air Force, US Army, and US Navy. X-Mode CEO Josh Anton told CNN Business in April the company tracks 25 million devices in the US every month.

The Wall Street Journal also reported last month that the US Air Force is indirectly using location data from X-Mode to monitor internet-of-things devices.

Other private data brokers have faced pushback in recent months for similar sales of Americans’ location data to US government agencies and contractors. Lawmakers are investigating Venntel for selling data to the FBI and Department of Homeland Security, who reportedly used the data to surveil illegal immigrants, as well as the IRS for buying data from Venntel.

Read the original article on Business Insider