“We may collect biometric identifiers and biometric information as defined under US laws, such as faceprints and voiceprints, from your User Content. Where required by law, we will seek any required permissions from you prior to any such collection,” the new policy reads.
As noted by TechCrunch, which earlier reported on the changes, that language could allow TikTok to collect most US users’ biometric data without explicitly asking them, because only a few states have laws restricting companies from collecting such data.
TikTok didn’t respond to Insider’s questions about whether it had already begun collecting users’ biometric data. However, the new language is found within a section titled “information we collect automatically,” meaning TikTok could potentially be collecting it already.
TechCrunch also noted that the policy doesn’t define “faceprints” or “voiceprints,” or explain why TikTok needs this data in the first place.
In February, TikTok paid $92 million to settle a class-action lawsuit in Illinois over allegations that it violated the state’s biometric data privacy law.
Last year, the Trump administration unsuccessfully attempted to ban TikTok from the US entirely, claiming its ownership by Beijing-based ByteDance posed a national security threat.
While President Joe Biden on Thursday issued an executive order banning Americans from investing in Chinese firms linked to surveillance of religious and ethnic minorities, his administration hasn’t taken an explicit position on TikTok.
Newly unredacted documents in a lawsuit against Google reveal that the company’s own executives and engineers knew just how difficult the company had made it for smartphone users to keep their location data private.
Google continued collecting location data even when users turned off various location-sharing settings, made popular privacy settings harder to find, and even pressured LG and other phone makers into hiding settings precisely because users liked them, according to the documents.
Jack Menzel, a former vice president overseeing Google Maps, admitted during a deposition that the only way Google wouldn’t be able to figure out a user’s home and work locations is if that person intentionally threw Google off the trail by setting their home and work addresses as some other random locations.
Jen Chai, a Google senior product manager in charge of location services, didn’t know how the company’s complex web of privacy settings interacted with each other, according to the documents.
Google and LG did not respond to requests for comment on this story.
A judge ordered new sections of the documents to be unredacted last week in response to a request by trade groups Digital Content Next and News Media Alliance, which argued that it was in the public’s interest to know and that Google was using its legal resources to suppress scrutiny of its data collection practices.
The unsealed versions of the documents paint an even more detailed picture of how Google obscured its data collection techniques, confusing not just its users but also its own employees.
Google uses a variety of avenues to collect user location data, according to the documents, including WiFi and even third-party apps not affiliated with Google, forcing users to share their data in order to use those apps or, in some cases, even to connect their phones to WiFi.
“So there is no way to give a third party app your location and not Google?” one employee said, according to the documents, adding: “This doesn’t sound like something we would want on the front page of the [New York Times].”
When Google tested versions of its Android operating system that made privacy settings easier to find, users took advantage of them, which Google viewed as a “problem,” according to the documents. To solve that problem, Google then sought to bury those settings deeper within the settings menu.
Google also tried to convince smartphone makers to hide location settings “through active misrepresentations and/or concealment, suppression, or omission of facts” – that is, data Google had showing that users were using those settings – “in order to assuage [manufacturers’] privacy concerns.”
Google employees appeared to recognize that users were frustrated by the company’s aggressive data collection practices, potentially hurting its business.
“Fail #2: *I* should be able to get *my* location on *my* phone without sharing that information with Google,” one employee said.
“This may be how Apple is eating our lunch,” they added, saying Apple was “much more likely” to let users take advantage of location-based apps and services on their phones without sharing the data with Apple.
Facebook blocked ads that Signal wanted to buy that would show Instagram users the data that Facebook collects from them, according to the encrypted messaging company.
In a blog post entitled “The Instagram ads Facebook won’t show you,” Signal said the likes of Facebook are driven to collect people’s data to sell, and the company wanted to showcase how that technology works. So it tried to buy “multi-variant targeted” ads on Instagram “designed to show you the personal data that Facebook collects about you and sells access to.” Facebook responded by shutting down Signal’s account, the blog post said.
“Being transparent about how ads use people’s data is apparently enough to get banned; in Facebook’s world, the only acceptable usage is to hide what you’re doing from your audience,” the company wrote in its post.
Signal posted examples of what the ads would look like on its blog. One reads: “You got this ad because you’re a newlywed pilates instructor and you’re cartoon crazy. This ad used your location to see you’re in La Jolla. You’re into parenting blogs and thinking about LGBTQ adoption.”
CEO Moxie Marlinspike tweeted another example that shows how a user could be targeted with ads based on their job, location, dietary preferences, and fitness interests.
Signal and Facebook did not immediately respond to Insider’s requests for comment.
Facebook has taken down ads critical of the company before. In 2019, Democratic Sen. Elizabeth Warren, who was running for office at the time, ran ads that laid out her plan to split up Facebook as well as other big tech companies. Facebook said it blocked the ads because they violated its rules around using the company’s corporate logo but eventually reinstated them.
Facebook’s ad business relies upon data tracking to inform its algorithm that decides which ads to put in front of online users, and it’s lucrative: It bolstered the social media giant’s Q1 revenue to $26.17 billion, up 48% from this time last year. The company attributed the rise to an increase in the average price per ad as well as the number of ad impressions.
Facebook has been vocal about its ad business being at risk thanks to a new privacy update that Apple has rolled out. The latest iOS update includes the company’s “App Tracking Transparency” feature that forces app developers to ask for permission to collect and track users’ data. Facebook has argued that the new feature will hurt small businesses that rely on personalized ads.
Google is facing a lawsuit after a privacy vulnerability in its contact tracing system left users’ data exposed.
Google was alerted in February that users’ sensitive data was exposed to third-party apps that were already installed on their mobile devices and that it failed to inform the public, according to the analysis company AppCensus and a report from The Markup. Google told The Verge that it was finding a solution to the privacy flaw.
The lawsuit, filed in a US district court in California by two affected users, demands that Google fix the security problem and be held accountable for “damages and restitution.”
“Because Google’s implementation of GAEN allows this sensitive contact tracing data to be placed on a device’s system logs and provides dozens or even hundreds of third parties access to these system logs, Google has exposed GAEN participants’ private personal and medical information associated with contact tracing, including notifications to Android device users of their potential exposure to COVID-19,” the lawsuit reads.
Apple and Google did not immediately respond to Insider’s request for comment.
In May 2020, the technology was made available to public health officials around the world to integrate with government health apps and verify and log a user’s contact status. Experts told Insider’s Aaron Holmes last April that the system would only be useful if Apple and Google could recruit enough people to use it.
Contact tracing apps that use the technology run on both Google’s Android and Apple’s iOS operating systems. The analysis that AppCensus conducted did not find the flaw on iPhones running the software, according to The Verge.
Several Tesla vehicle models, including the Model 3 and Model Y, record and transmit video footage of drivers and passengers via in-car cameras. The cameras are designed to help Tesla develop its full self-driving software, but present a serious privacy risk, according to Consumer Reports.
John Davisson, senior counsel at the Electronic Privacy Information Center (EPIC), told Consumer Reports the footage opens Tesla drivers up to a whole host of privacy concerns, including the potential for outside parties to gain access to the data for malicious purposes, as well as Tesla itself using the data for its own gain.
“It may later be repurposed for a system that is designed to track the behaviors of the driver, potentially for other business purposes,” Davisson told Consumer Reports.
Jake Fisher, senior director of Consumer Reports’ auto-test center, told Insider the most concerning aspect of its investigation into the cameras was that Tesla was not being entirely transparent about how the cameras were being used.
“Tesla could be using these cameras to stop crashes and they’re using it for studies, to help Tesla develop more things,” Fisher told Insider. “Tesla is the only automaker that has hardware that could help stop crashes, but isn’t using it for the driver’s safety.”
Tesla did not immediately respond to a request for comment on the report.
Tesla CEO Elon Musk said on Twitter that Tesla has used the in-car cameras to remove its full self-driving software from drivers that “did not pay sufficient attention to the road.”
Other car companies, including BMW, Ford, and General Motors, have elaborate driver monitoring systems, but they have focused those systems on driver safety rather than data collection. Consumer Reports notes these companies do not record, save, or transmit the data, and use infrared technology to identify a driver’s eye movements or head position instead of video cameras.
While Tesla does not use the in-car cameras to alert the driver to potential safety concerns, the company does use a real-time driver-engagement tool that analyzes steering wheel inputs, measuring the amount of pressure put on the wheel, to keep drivers alert.
Consumer Reports said the steering wheel inputs can be easily tricked. “Just because a driver’s hands are on the wheel doesn’t mean their attention is on the road,” said Kelly Funkhouser, program manager for vehicle interface testing at Consumer Reports. Fisher told Insider in-car cameras could help save a lot of lives.
Tesla drivers can opt out of sharing the in-car videos via their control settings, and the Cabin Camera is disabled by default. According to Tesla’s site, the camera will only turn on before a crash or automatic emergency braking (AEB) activation.
The Global Privacy Control (GPC) is a technology initiative being spearheaded by a group of publishers and technology companies to create a global setting in web browsers that allows users to control their privacy online. This means you should be able to set the GPC control in your browser to prevent websites from selling your personal data.
Why the Global Privacy Control feature is important
In recent years, there has been increasing scrutiny on privacy rights online. In 2018, the European Union’s General Data Protection Regulation (GDPR) went into effect, limiting the data websites can collect on EU citizens. The California Consumer Privacy Act (CCPA) is a similar legislative measure that went into effect in California in 2020.
While there is enhanced interest in online privacy and some governments are taking steps to limit what websites can do with user data, there is no global way for users to opt out of having their personal information sold or used in ways they don’t approve of. Every website that needs to comply with legal mandates – or simply implement more progressive privacy policies – must implement an opt-out mechanism on its own.
The GPC is built to inform websites not to sell user data. This is different from other privacy tools that might limit tracking but might still allow user data to be sold (or to sell that data itself).
When fully implemented, the GPC may allow you to opt out of having your personal data sold by the websites you visit.
Status of the Global Privacy Control feature
Buoyed by these new laws, the GPC is intended to be a single, global setting users can activate in their web browser that signals to all websites the user’s intention about their data privacy.
Currently, the specification is being written by an informal consortium of more than a dozen organizations including the Electronic Frontier Foundation (EFF), the National Science Foundation, The New York Times, Mozilla, The Washington Post, and Consumer Reports.
The specification that will govern how the GPC will be implemented and behave is still in development, though in principle it simply allows a website to read a value (such as a Sec-GPC header set to “1”) to know that the user has chosen to opt out of having their data sold.
A number of web browsers and browser extensions have implemented the GPC in its draft form. However, the GPC setting currently carries no legal weight: if you use a browser or extension with the GPC feature, no websites are yet obligated to respect it, and compliance is voluntary.
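The mechanism described above can be sketched from the server’s side: a site that chooses to honor the signal simply inspects incoming request headers for the Sec-GPC value the draft specification defines. The function name and sample headers below are illustrative, not part of any official implementation.

```python
def gpc_opt_out(headers: dict) -> bool:
    """Return True if a request carries the Global Privacy Control signal.

    Under the draft GPC specification, a participating browser sends the
    header `Sec-GPC: 1` with its requests when the user has enabled the
    setting. HTTP header names are case-insensitive, so we normalize them
    before checking.
    """
    normalized = {name.lower(): value.strip() for name, value in headers.items()}
    return normalized.get("sec-gpc") == "1"


# Example: a site deciding whether it may sell this visitor's data.
request_headers = {"User-Agent": "ExampleBrowser/1.0", "Sec-GPC": "1"}
if gpc_opt_out(request_headers):
    print("Visitor has opted out: do not sell or share their data.")
```

The draft specification also proposes a client-side counterpart, exposing the user’s choice to page scripts as navigator.globalPrivacyControl, so a site can check the preference in the browser as well as on the server.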
WhatsApp sent users a notification asking them to sign off on updated terms and conditions, which stipulated that the app could share reams of metadata – including their phone numbers, locations, and contacts – with its parent company Facebook. If users did not consent, the notification said, they would lose access to WhatsApp.
The notification shocked users, at least some of whom use WhatsApp because the encrypted messaging app touts itself as privacy-focused. High-profile figures including Tesla’s CEO Elon Musk, the world’s richest man, recommended users switch to Signal, a much smaller rival encrypted messaging app.
WhatsApp soon went into damage-control mode, putting up a new FAQ about the policy change and delaying the deadline for users to agree to the new terms and conditions from February 8 until May 15.
As it happens, it doesn’t look like anything has really changed about how WhatsApp shares data with Facebook.
The updates to the T&Cs were solely intended to let business accounts on WhatsApp link up with Facebook’s back-end analytics infrastructure, WhatsApp said. They do not change anything about the way an average user’s data gets passed back to Facebook, it said.
WhatsApp gave users 30 days to opt out of sharing some data with Facebook back in 2016 – Wired reported that this opt-out would still be honored, and WhatsApp confirmed the report to Insider.
What WhatsApp accidentally did with its notification was to highlight to users exactly how much of their data it was already sending back to the Facebook mothership.
“I suspect people were alarmed by being reacquainted with what WhatsApp already share”
Alan Woodward, a cybersecurity expert at the University of Surrey, said WhatsApp made the new T&Cs look a lot more scary to users by telling them they’d lose access if they didn’t consent.
“WhatsApp presented this as an ultimatum to users, which never goes down well: accept these new terms or stop using the service. They could perhaps have been a lot clearer up front about what the changes were, in which case many would have simply said okay,” Woodward said.
“I suspect people were alarmed by being reacquainted with what WhatsApp already share,” he said.
Professor Eerke Boiten of De Montfort University agreed that WhatsApp’s method of sending a notification with what appeared to be an ultimatum was a misstep.
“The main thing they got wrong was putting it into the users’ faces. They’ve alerted users to something that didn’t get massively worse […] in any significant sense, but was a looming problem all along,” Boiten told Insider.
WhatsApp’s shifting attitude to privacy has been a cause for concern among tech industry insiders and privacy advocates for a long time. The decision to increasingly link WhatsApp up with Facebook’s ad business is what drove its cofounder Brian Acton to leave the company – the same is reportedly true for cofounder Jan Koum.
Acton subsequently helped found the non-profit Signal Foundation, which backs Signal.
“The move from WhatsApp to Signal is maybe not justified by the immediate incidence, but in broader terms it’s a good thing,” Boiten added.
You can see how much more data WhatsApp collects than Signal by using the Apple App Store’s new privacy information feature. While WhatsApp cannot read the contents of messages because they are encrypted, it is able to hoover up metadata – i.e., data about an account and its messaging. That includes information like your phone number, as well as who you’re messaging and when.
Woodward also pointed to WhatsApp’s collection of metadata. “The perverse thing is that WhatsApp encryption is based upon the same as used by Signal, but whilst [WhatsApp] keep the content [of] your messages confidential they do harvest some metadata, and knowing who talked to whom, when and for how [long] can be valuable data in targeting advertising by identifying affinity group[s],” he said.
Signal’s focus on privacy does come with a tradeoff: if you make it impossible to gather things like metadata, tracking down illegal activity on a messaging app becomes difficult. Signal employees are reportedly worried the company’s explosive growth could mean it attracts extremists, The Verge reported.
But CEO Moxie Marlinspike thinks the benefits of a truly private messenger outweigh the potential abuses.
“I want us as an organization to be really careful about doing things that make Signal less effective for those sort of bad actors if it would also make Signal less effective for the types of actors that we want to support and encourage […] Because I think that the latter have an outsized risk profile. There’s an asymmetry there, where it could end up affecting them more dramatically,” Marlinspike told the Verge.
While the new WhatsApp notification appears to be a PR blunder, Woodward doesn’t think WhatsApp is in deep trouble long-term.
“WhatsApp still has a critical mass of users and many are quite relaxed about the unwritten social contract that says you can use our service for free in return for us using your data to make a profit,” he said.
Uber must pay a $59.1 million fine in California for repeatedly refusing to turn over data related to its 2019 sexual assault report to the California Public Utilities Commission, an administrative judge ruled Monday.
In the ruling, the judge ordered Uber to pay the fine and turn over the data within 30 days or CPUC — which oversees rideshare companies — can revoke Uber’s license to operate in the state.
Uber said it refused the CPUC’s requests in order to protect the privacy of survivors, but the judge rejected that argument, noting Uber still wouldn’t hand over the data when given the chance to do so anonymously.
The ruling resurfaced Uber’s years-long challenge addressing sexual assault involving its customers and drivers, as well as its history of hardball tactics with regulators.
Uber has been ordered to pay a $59.1 million fine to the California Public Utilities Commission for repeatedly refusing to comply with its requests for data about the company’s 2019 sexual assault report.
On Monday, an administrative law judge ruled that Uber must pay the fine within 30 days and turn over the data or CPUC can revoke Uber’s license to operate within California.
Uber “refused, without any legitimate legal or factual grounds, to comply” with multiple previous administrative rulings ordering it to turn over the data, Monday’s ruling said.
Last December, following intense public pressure, Uber issued a report that said it had received 3,045 reports of sexual assault in the US in 2018 – an average of more than eight per day.
Days later, CPUC – the agency responsible for regulating ridesharing services like Uber – demanded more information from Uber, including the names and contact information for all authors of the safety report, witnesses to the alleged assaults (including victims), and the person at Uber each incident was reported to.
“The CPUC has been insistent in its demands that we release the full names and contact information of sexual assault survivors without their consent. We opposed this shocking violation of privacy, alongside many victims’ rights advocates,” an Uber spokesperson told Business Insider.
Uber had also argued that the data would end up in the hands of “untrained individuals” and that regulators hadn’t asked other rideshare companies for similar information.
In a January ruling, however, an administrative judge addressed Uber’s privacy concerns by allowing the company to submit the information to CPUC under seal to shield it from public view.
Still, Uber refused to comply, and according to Monday’s ruling, “inserted a series of specious legal roadblocks to frustrate the Commission’s ability to gather information that would allow the Commission to determine if Uber’s TNC operations are being conducted safely.”
An Uber spokesperson blamed the CPUC for delays and adjustments to its data request that resulted in the fine, telling Business Insider: “These punitive and confusing actions will do nothing to improve public safety and will only create a chilling effect as other companies consider releasing their own reports. Transparency should be encouraged, not punished.”
But Monday’s order said that Uber “failed to respect the authority” of the January ruling, instead choosing to “roll the dice” on legal challenges that largely raised the same issues judges had already rejected.
Apple and Google have banned X-Mode, a major data broker, from collecting location data from users whose mobile devices run iOS and Android, The Wall Street Journal reported Wednesday.
The tech giants told developers they must remove X-Mode’s tracking software or risk being cut off from their app stores — and therefore the vast majority of mobile devices globally.
The move by Apple and Google follows recent reports by The Wall Street Journal and Vice News about X-Mode’s national security contracts and congressional scrutiny over how government agencies purchase Americans’ location data from private companies.
Apple and Google have banned X-Mode Social, a major data broker, from collecting mobile location data from iOS and Android users following criticism of its national security work, The Wall Street Journal reported Wednesday.
The tech giants are requiring developers to remove X-Mode’s tracking software from their apps or they could get cut off from Apple’s App Store and Google’s Play Store, according to The Journal. Apple has given developers two weeks to comply, the newspaper reported.
In a statement to Business Insider, a Google spokesperson said: “We are sending a 7-day warning to all developers using the X-Mode SDK. Apps that need more time due to the complexity of their implementation can request an extension, which can be up to 30 days (including the initial 7-days). If X-Mode is still present in the app after the timeframe, the app will be removed from Play.”
Apple’s iOS and Google’s Android mobile operating systems power nearly all smartphones worldwide, effectively forcing developers to ditch X-Mode, and the policies mark one of the most direct actions against a specific data broker.
“X-Mode collects similar mobile app data as most location and advertising SDKs in the industry. Apple and Google would be setting the precedent that they can determine private enterprises’ ability to collect and use mobile app data,” an X-Mode spokesperson told Business Insider in a statement.
X-Mode is still trying to get information from Apple and Google on why its tracking software is different from what other location data companies – or even Apple and Google themselves – collect, the spokesperson added.
Apple did not immediately respond to a request for comment on this story.
The moves by Apple and Google follow recent reports about how X-Mode sells users’ location data to US defense contractors and, by extension, the US military, law enforcement, and intelligence agencies – contracts that have drawn scrutiny from lawmakers who argue the practice undermines Americans’ privacy rights by allowing the government to avoid obtaining search warrants.
Both Apple and Google disclosed their new policies banning X-Mode to investigators working on behalf of Sen. Ron Wyden, according to The Wall Street Journal. Wyden has been investigating how private companies collect and sell Americans’ mobile location data to the government, often without their knowledge, and has proposed legislation that would ban the practice.
Vice News reported in November that X-Mode collects location data from users via as many as 400 apps, including Muslim prayer and dating apps, weather apps, and fitness trackers, and then sells that data to contractors that work with the US Air Force, US Army, and US Navy. X-Mode CEO Josh Anton told CNN Business in April the company tracks 25 million devices in the US every month.
The Wall Street Journal also reported last month that the US Air Force is indirectly using location data from X-Mode to monitor internet-of-things devices.