Israeli military-grade spy software was used to hack phones of journalists, activists, executives, and 2 women connected to murdered journalist Jamal Khashoggi, a report says

Woman holds phone outside NSO Group in Herzliya
An Israeli woman uses her iPhone in front of the building housing the Israeli NSO Group on August 28, 2016, in Herzliya, near Tel Aviv.

  • Military-grade spyware technology was used to hack the smartphones of journalists, activists, and executives, The Washington Post reported.
  • Some of the affected journalists worked at outlets including CNN and The New York Times.
  • The 37 numbers appeared on a list of 50,000 phone numbers in countries with a history of conducting surveillance on their own citizens, according to the report.
  • See more stories on Insider’s business page.

Military-grade spyware created by an Israeli company that sells it to governments to counter terrorism and criminal activity was used to target the smartphones of 37 journalists, activists, and business executives, The Washington Post reported Sunday.

The investigation was conducted by the Post and 16 other media partners, according to the report.

Among those targeted by attempted smartphone hacking, which used software called Pegasus, were journalists working at CNN, the Associated Press, The New York Times, The Wall Street Journal, Bloomberg, and Voice of America in the US. Targets also included journalists working for Le Monde in France, the Financial Times in London, and Al Jazeera in Qatar, according to the Post report.

Two women connected to the Saudi journalist Jamal Khashoggi, who was murdered in October 2018 in a Saudi consulate in Istanbul, were also on the list, according to the report.

The 37 numbers appeared on a list of 50,000 phone numbers concentrated in countries with a history of conducting surveillance on their own citizens and with ties to the Israeli cyber-surveillance firm NSO Group, which created and sells the Pegasus software, according to the Post.

The list was shared with media outlets by the Paris-based non-profit Forbidden Stories and by Amnesty International, according to the report.

The list does not identify who placed the numbers on it. More than 15,000 of the phone numbers on the list were from Mexico, while another sizable chunk came from Middle Eastern countries, including the United Arab Emirates, Qatar, Saudi Arabia, Bahrain, and Yemen, according to the Post.

Read the full story at The Washington Post

Read the original article on Business Insider

7-Eleven owners are using surveillance cameras to prevent workers from stealing, months after some franchisees paid back $173 million over wage-theft accusations

The front doors to a closed 7-Eleven store are seen with a chain and padlock securing them shut.
7-Eleven and other convenience stores have deployed surveillance tech to prevent worker theft and shoplifting, but the companies’ theft of employee wages often goes unscrutinized.

Workers at some 7-Eleven stores, Shell gas stations, Dairy Queen restaurants, Holiday Inn hotels and other service businesses are being monitored 24/7 via security cameras by a “virtual supervisor,” Vice News reported Monday.

The Washington-based company Live Eye offers the surveillance technology for $399 per month. It is ostensibly meant to help business owners deter shoplifting, robberies, and employee theft.

In Live Eye’s promotional videos, according to Vice, remote supervisors question convenience store workers about whether they paid for drinks and why they’re talking to a person who’s out of view of the camera, and even attempt to scare off suspected robbers in what appears to be a 7-Eleven store.

Theft and fraud aren’t small problems – they cost retailers $62 billion in 2019, according to the National Retail Federation.

But experts are increasingly raising concerns about whether such intrusive and persistent surveillance may impose even greater costs on workers. They’re also asking why problems like wage theft – with employers in 2019 stealing an estimated $12.6 billion from workers making less than $13 per hour, according to the National Employment Law Project – receive far less scrutiny.

7-Eleven and Live Eye did not respond to requests for comment on this story.

Part of the problem, some experts say, stems from the prejudices many people hold against low-wage workers and low-income individuals.

“When it comes to low-wage work and people who are on welfare, there’s just this general distrust,” Aiha Nguyen, a researcher at the think tank Data & Society who recently wrote a report on workplace surveillance called “The Constant Boss,” told Insider.

“It has nothing to do with technology,” she said, adding there’s a “dual standard” in how employers treat low-wage workers compared to corporate employees and executives.

The NRF’s survey found retail chains fired an average of 559 employees for stealing in 2019, and prosecuted an average of 156, while research by the Economic Policy Institute found employers are rarely prosecuted and underfunded labor regulators often lack the ability to enforce wage laws.

7-Eleven announced last year that some of its franchisees paid $173 million in back wages to more than 4,000 workers in Australia after media outlets including Four Corners and The Sydney Morning Herald reported the company stole as much as half of workers’ wages, doctored pay records to cover up the scheme, and threatened potential whistleblowers with deportation.

During an investigation by Australia’s labor regulator, 7-Eleven’s head office in the country faced accusations that it had bribed franchisees not to testify against its executives. (7-Eleven denied the accusations).

While the company’s chairman, Russell Withers, and its CEO, Warren Wilmot, eventually resigned, Withers went on to pocket $78 million from selling stakes in 7-Eleven stores. Wilmot faced no financial or criminal penalties as a result of the scandal.

In that context, Nguyen said, theft by 7-Eleven store employees “is a much smaller problem than that bigger issue of wage theft.”

But that’s not to say tools like Live Eye are inherently bad.

“In theory, this system could be helpful” for retail workers, Nguyen said, “but that’s not how it’s being deployed.”

Hiring a security guard would be far more effective at deterring theft, she said, “but what’s being presented is a video camera that may be turned on the workers themselves.” It is not immediately clear whether, or how many, 7-Eleven stores employ security guards in addition to other safety measures.

A former 7-Eleven consultant told Vice that Live Eye was “a solution in search of a problem,” because theft and shoplifting don’t cost stores much, and that startling armed robbers violated the company’s policy and could put workers in more danger.

“We provide every 7-Eleven store with a base security system that includes CCTV and alarms, however, independent franchise owners can install their own system on top of what is provided,” a 7-Eleven spokesperson told Vice.

“We’re using insecurity about the risk of robbery as an excuse to target workers,” Eva Blum-Dumontet, a senior researcher at Privacy International, told Vice, adding: “what’s happening with workplace surveillance is employers trying to keep track of their employees to make sure they match their idea of productivity. This is very toxic for the mental health of employees.”


Watch a $399 speaking CCTV camera used in some 7-Eleven stores ask a clerk whether they’ve paid for an iced coffee they grabbed from the fridge

A man wearing a facemask walks past a 7-Eleven sign on an orange wall
7-Eleven has installed security cameras that remote operators can speak through to monitor workers.

  • Some 7-Eleven and Dairy Queen stores use CCTV cameras that can monitor and speak to staff, VICE reported.
  • Live Eye Surveillance’s $399-per-month camera system is designed to deter theft and improve productivity.
  • Promotional footage shows a remote camera operator questioning a worker about an iced coffee.

Some branches of 7-Eleven and other major companies have installed security cameras that can speak directly to customers to prevent theft – and even monitor staff on their drinks breaks, according to a VICE Motherboard report.

Live Eye Surveillance, a Seattle-based security tech company, makes the cameras for convenience stores, hotels, restaurants, and gas stations, according to its website. The company charges $399 per month, and employs remote workers in India to monitor footage 24/7, according to a sales email viewed by VICE.

The company lists 7-Eleven, Dairy Queen, Holiday Inn, and Shell among its clients on its website. It is not clear whether these businesses are still using the systems.

Operators can speak through the cameras to question store workers. In one CCTV video published by Live Eye to promote its systems, the camera asks a worker in an unnamed convenience store whether he’d paid for an iced coffee he was drinking.

“Good morning cashier, this is Live Eye Surveillance stream,” the camera operator tells the clerk. “Could you please confirm me that you have scanned the bottle that you took from the cooler?”

In another video, the remote operator intervenes in an armed robbery, startling two attackers, one of whom is carrying a gun. The two men are shown running away before the operator asks the cashier to call 911.

Live Eye’s security systems are designed to “deter thefts and improve profits,” enabling “real time interaction with the employee and protection of assets,” according to the company’s website.

A job posting for a camera operator based in Karnal, India, said that responsibilities include “creating reports for any suspicious activities” involving employees or customers.

“You will act as a virtual supervisor for the sites, in terms of assuring the safety of the employees located overseas and requesting them to complete assigned tasks,” the job description said.

Some of Live Eye’s customers, including 7-Eleven, operate as franchises, so it is not clear whether franchisees or the parent business purchased its surveillance tech, VICE reported.

“7-Eleven, Inc. cares deeply about the safety of our associates and customers,” 7-Eleven said in a statement to VICE. “We provide every 7-Eleven store with a base security system that includes CCTV and alarms, however, independent franchise owners can install their own system on top of what is provided.”

Live Eye, 7-Eleven, Dairy Queen, Holiday Inn, and Shell did not immediately respond to Insider’s requests for comment.


Amazon’s Ring recruited LAPD officers as brand ambassadors to help sell its products through influencer marketing

Four Los Angeles Police Department officers stand outside wearing face masks.
Los Angeles Police Department (LAPD) officers wear face coverings while monitoring an “Open California” rally in downtown Los Angeles, on April 22, 2020.

Amazon-owned Ring recruited Los Angeles Police Department officers as brand ambassadors to help promote its security cameras and smart doorbells, the Los Angeles Times reported Thursday.

Ring gave free products or discount codes to more than 100 LAPD officers, and encouraged them to share the discount codes with other law enforcement officers and members of the public, according to the Times.

More than 15 of those officers ended up promoting Ring products, and the company ended up giving away tens of thousands of dollars in free product, including at least $12,000 to a single police station, the Times reported.

Amazon did not respond to a request for comment on this story and the LAPD could not immediately be reached.

Ring’s version of an influencer marketing program, which it called “Neighborhood Pillar,” enlisted LAPD officers to “educate members of the community on the benefits of Ring” by sharing discount codes and promotional materials with “influential people in the community that care about crime prevention safety.” Every 10 uses of the discount codes earned an officer a free Ring device, the Times reported.

Ring told the Times that it ended the program in 2019 and now works directly with community groups.

But LAPD ethics policies prohibit officers from receiving “gifts, gratuities or favors of any kind which might reasonably be interpreted as an attempt to influence their actions with respect to City business.”

An LAPD spokesperson told the Times that an initial review of emails between its officers and Ring didn’t find any violations of that policy.

But privacy and law enforcement experts told the Times that officers’ public safety advice to members of their communities might be biased – or give the appearance of bias – because of the promise of receiving free Ring devices.

Vice previously reported on how Ring aggressively pursued partnerships with law enforcement agencies around the country over the years as it has tried to build out its private surveillance camera network, inking deals with hundreds of police and even fire departments.

The rise of such privately owned networks, which in many cases allow police to access camera footage without a warrant, has drawn criticism from privacy and criminal justice experts, as well as Amazon employees, who say the networks present major privacy risks.


Instagram pushes back against ‘The Algorithm’ – because it tracks every action you take on the app using ‘a variety of algorithms’

woman on phone with juice
Instagram said “The Algorithm” does not determine what a user sees on their feed.

  • Instagram said it does not use “The Algorithm” to determine the content on users’ social media feeds.
  • In reality, Instagram says it has many different algorithms for Reels, Feed, and Explore.
  • A non-profit recently reported Instagram recommended posts containing misinformation.

Instagram said there is no one algorithm it uses to determine the content on your social media feed.

Instagram head Adam Mosseri said in a blog post that many users have a misconception that the firm uses “The Algorithm” to oversee the content that appears for an individual user.

In reality, Instagram uses a “variety” of algorithms and processes. The Feed where posts appear, the Explore section that shows content from non-followers, and Reels, the TikTok-like video feature, each use a different algorithm to determine the content a user sees.


“People tend to look for their closest friends in Stories, but they want to discover something entirely new in Explore,” Mosseri said in the blog. “We rank things differently in different parts of the app, based on how people use them.”

Users and organizations have said Instagram’s algorithm can share misinformation and censor content. The non-profit Center for Countering Digital Hate reported Instagram recommended posts containing misinformation and conspiracy theories to 15 volunteers who made new accounts. The firm reportedly made adjustments to the algorithm to favor viral content after users said they could not see content made by Palestinians and Palestinian allies during recent conflicts in Israel.

Algorithm Watch, a research and advocacy organization focused on algorithmic decision-making, found that Instagram’s algorithm boosted semi-nude photos in the feeds of a group of 26 volunteer users.

When the platform launched in 2010, Instagram showed users photos posted by their friends in chronological order. The company created algorithms to let users see the posts they care about most to make sure no one missed important content, Mosseri said in the release.


‘Apple is eating our lunch’: Google employees admit in lawsuit that the company made it nearly impossible for users to keep their location private

Google New York Office
Google in Manhattan.

Newly unredacted documents in a lawsuit against Google reveal that the company’s own executives and engineers knew just how difficult the company had made it for smartphone users to keep their location data private.

Google continued collecting location data even when users turned off various location-sharing settings, made popular privacy settings harder to find, and even pressured LG and other phone makers into hiding settings precisely because users liked them, according to the documents.

Jack Menzel, a former vice president overseeing Google Maps, admitted during a deposition that the only way Google wouldn’t be able to figure out a user’s home and work locations is if that person intentionally threw Google off the trail by setting their home and work addresses as some other random locations.

Jen Chai, a Google senior product manager in charge of location services, didn’t know how the company’s complex web of privacy settings interacted with each other, according to the documents.

Google and LG did not respond to requests for comment on this story.

The documents are part of a lawsuit the Arizona attorney general’s office brought against Google last year, which accused the company of illegally collecting location data from smartphone users even after they opted out.

A judge ordered new sections of the documents to be unredacted last week in response to a request by trade groups Digital Content Next and News Media Alliance, which argued that it was in the public’s interest to know and that Google was using its legal resources to suppress scrutiny of its data collection practices.

The unsealed versions of the documents paint an even more detailed picture of how Google obscured its data collection techniques, confusing not just its users but also its own employees.

Google uses a variety of avenues to collect user location data, according to the documents, including WiFi and even third-party apps not affiliated with Google, forcing users to share their data in order to use those apps or, in some cases, even connect their phones to WiFi.

“So there is no way to give a third party app your location and not Google?” one employee said, according to the documents, adding: “This doesn’t sound like something we would want on the front page of the [New York Times].”

When Google tested versions of its Android operating system that made privacy settings easier to find, users took advantage of them, which Google viewed as a “problem,” according to the documents. To solve that problem, Google then sought to bury those settings deeper within the settings menu.

Google also tried to convince smartphone makers to hide location settings “through active misrepresentations and/or concealment, suppression, or omission of facts” – that is, data Google had showing that users were using those settings – “in order to assuage [manufacturers’] privacy concerns.”

Google employees appeared to recognize that users were frustrated by the company’s aggressive data collection practices, potentially hurting its business.

“Fail #2: *I* should be able to get *my* location on *my* phone without sharing that information with Google,” one employee said.

“This may be how Apple is eating our lunch,” they added, saying Apple was “much more likely” to let users take advantage of location-based apps and services on their phones without sharing the data with Apple.


Amazon drivers describe the paranoia of working under the watchful eyes of new truck cameras that monitor them constantly and fire off ‘rage-inducing’ alerts if they make a wrong move

Amazon Delivery Driver
An Amazon delivery driver.

  • Amazon drivers now have multiple cameras constantly filming them as part of the Driveri system.
  • Drivers told Insider they’re worried about privacy, with cameras monitoring every yawn.
  • They fear they’ll fail to keep up with Amazon’s breakneck pace because of the new surveillance system.

Many Amazon drivers say the solitude and the independence of working on the road are big draws of the job.

But those perks are under threat since Amazon started installing surveillance cameras in delivery vans that monitor workers’ driving, hand movements, and even facial expressions.

Some workers are paranoid about what the cameras – which peer at them from their windshields and fire off audible alerts following missteps – are watching and how they could be punished for what the technology flags, according to interviews with five drivers.

“I know we’re on a job, but, I mean, I’m afraid to scratch my nose. I’m afraid to move my hair out of my face, you know?” a female driver based in Oklahoma told Insider. “Because we’re going to get dinged for it.”

The Oklahoma driver and several others interviewed asked that their names be withheld for fear that their jobs would be affected, but Insider verified their identities.

Several drivers said the cameras could be helpful in cases of collisions or other dangerous situations. But they also worried about how the technology was affecting their productivity and described concerns with managing bathroom needs, like changing adult diapers, within sight of the cameras.

“We have zero privacy and no margin for error,” a California-based driver said.

Netradyne, the maker of the camera system, did not respond to Insider’s request for comment. A representative for Amazon said in a statement to Insider that Netradyne cameras are used to keep drivers and communities safe. In a pilot of the cameras from April to October 2020, accidents dropped by 48%, stop-sign violations dropped by 20%, driving without a seatbelt dropped by 60%, and distracted driving dropped by 45%, according to the company.

“Don’t believe the self-interested critics who claim these cameras are intended for anything other than safety,” Amazon’s statement said.

The cameras capture yawns, distracted driving, and more

Amazon Driveri instruction video
A still from the instructional video on Amazon’s Netradyne camera system.

The camera system, called Driveri, isn’t made by Amazon. It was created by Netradyne, a transportation company that uses artificial intelligence to monitor fleets of drivers.

The system, mounted on the inside of a windshield, contains four cameras: a road-facing camera, two side-facing cameras, and one camera that faces inward toward the driver. Together, the cameras provide 270 degrees of coverage.

While the cameras record 100% of the time when the ignition is running, Amazon says the system does not have audio functionality or a live-view feature, meaning drivers can’t be watched in real time while they drive. The cameras upload the footage only when they detect one of 16 issues, such as hard braking or a seatbelt lapse, and that footage can be accessed only by “a limited set of authorized people,” Karolina Haraldsdottir, a senior manager for last-mile safety at Amazon, said in a training video about the cameras.

The Driveri system also sounds alerts in four instances: failure to stop, inadequate following distance, speeding, or distracted driving.

The system can be shut off, but only when the ignition is also turned off. Amazon said it would share video data with third parties, such as the police, only in the event of a dangerous incident.

The camera system sparked a backlash from some drivers shortly after it was announced. A driver named Vic told the Thomson Reuters Foundation that the cameras were the final straw that led him to quit, calling them “both a privacy violation and a breach of trust.”

A driver named Angel Rajal told Insider last month that he thought the new cameras were “annoying” and made him feel as if he were always being watched.

“I get a ‘distracted driver’ notification even if I’m changing the radio station or drinking water,” he said.


Drivers say they’re worried about their privacy

Amazon delivery driver
The struggles of Amazon drivers have been in the spotlight recently.

In interviews with Insider, drivers whose vans have the cameras installed highlighted a slew of issues they were facing so far. Lack of privacy is a top concern, they said.

Several drivers said they feared that yawning while driving would result in an infraction for drowsiness. And with some drivers feeling pressured to urinate in bottles on the job, there are concerns about being caught on camera in an uncomfortable position.

Bronwyn Brigham, a driver based in Houston who has driven trucks outfitted with Driveri for about two weeks, told Insider that the presence of the cameras made her feel as if she were being watched and made her worry about how to manage her bathroom needs inside the van.

“I have to wear a Depends because I’m 56,” she said, referring to a type of adult diaper. “If I wet that Depends, I need to take that off. Then the cameras are on, so that makes it hard. If I need to change into another one, they’re watching that.”

“We are all worried that we have zero privacy,” the California driver said. “Considering we have to use bottles to relieve ourselves – is that being watched?”

The ignition must be off to turn off the cameras, but that leaves drivers with no air conditioning.

As a result, drivers in regions that experience extreme heat during the summer will need to choose between privacy and cool air while they take their breaks.

‘Rage-inducing’ voices and guidance ‘designed to make you slower’

A male driver based in Oklahoma who has been driving with the cameras for about a month told Insider that the Driveri system was obstructing his view while he drives, making it difficult to see house numbers – and children playing – on the passenger side of the street.

“I’ve had times where I look up and there’s nobody there, and then all of a sudden the kid pops out from behind where the camera is obstructing the view,” the driver said.

The driver also said the camera’s verbal alerts, which use a computer-generated voice, were distracting and “rage-inducing.” That sentiment was echoed by several other drivers who said the alerts made them feel as if they were being micromanaged.

Several drivers told Insider that they were worried about receiving infractions for handling their phones on the job, even though they need the devices for navigation.

Drivers rely on two apps while they work: Mentor, which monitors driving, and Flex, Amazon’s navigation app. A driver who delivers near the Twin Cities told Insider that he juggled this by loading one app on his work phone and the other on his personal device.

“In order to be successful throughout your day, you have to zoom in and out on the map on the Flex app that you have on a dock that you can look at while driving,” he said. “My concern is that … with the cameras in place, it’s going to be noticing we’re using our phone while driving.”

Keeping up with Amazon’s demands is an ongoing concern for drivers. Some are worried that the new system will slow them down, making it more difficult to deliver all the packages they’re expected to drop off every day, which could be as many as 300.

For example, Driveri is triggered by a “failure to stop” at an intersection. However, the female Oklahoma-based driver said that in situations where a stop sign is several feet before the intersection, she had to stop twice to avoid an infraction, costing her valuable seconds. The California driver said he feared being reprimanded for going just a few miles above the speed limit.

Brigham said that she was doing her best to drive especially carefully now that the cameras are installed and that it was slowing her down. If she’s not moving fast enough, she said, she’ll get a call from her dispatcher – a supervisor who tracks drivers’ progress – telling her she’s running behind in her deliveries.

The male driver from Oklahoma said the new system felt like a Catch-22.

“The job is all about speed and how fast you can get to the door,” he said. “But these cameras and some of the other policies Amazon has in place, it’s like they’re designed to make you slower.”

Being watched by a computer is now part of the job

Amazon delivery
Cameras have advantages and create challenges.

Several of the drivers Insider interviewed said there were advantages to the Driveri system.

If an accident occurs during a delivery, for instance, the system will automatically upload the footage. Drivers will be able to prove if they were paying attention and following the rules of the road.

And the cameras will record outside the delivery van for 20 minutes even if the ignition is turned off, which could help drivers if someone approaches the van to harass or rob them.

Still, drivers say the cameras are a new frustration in an already challenging job.

“I do like my job, but it is stacked up against me,” the California driver said.

The driver said that 99% of the time he enjoyed delivering packages but that the cameras highlighted the extreme demands of the job. Recently, he said, he worked from 10:45 a.m. to 10:10 p.m. He said he did not have time for a single break and had to pee in a bottle twice. The entire time, he was aware the camera was on.

“The part that bothers me the most is that we’re being watched by a computer,” the male driver from Oklahoma said, “and that computer is what makes a judgment as to whether we’re doing something wrong or not, whether or not we get to keep our jobs.”

Are you an Amazon driver with a story to share? We want to hear from you. Email ahartmans@businessinsider.com or ktaylor@businessinsider.com, or via the Signal encrypted messenger app at (646) 768-4740.


Amazon is using new AI-powered cameras in delivery trucks that can sense when drivers yawn. Here’s how they work.

amazon prime van delivery
  • Amazon has installed AI-equipped camera systems in all of its delivery vehicles.
  • The Netradyne system can be triggered by a yawn or speeding.
  • The new system has sparked some backlash from workers.

In February, The Information reported on an instructional video for Amazon delivery drivers announcing a new suite of artificial intelligence-equipped cameras to surveil drivers during the entirety of their routes.

The decision sparked some backlash, and one driver told the Thomson Reuters Foundation that the policy change had driven him to quit, calling it an invasion of privacy. But how does it work?

In the introductory video shown to drivers, Karolina Haraldsdottir, Amazon’s senior manager for last-mile safety, explains how the “camera-based video safety technology” works.

The camera system, called “Driveri,” is manufactured by the AI and transportation startup Netradyne. Four cameras give 270 degrees of coverage: one faces out through the windshield, two face out the side windows, and one faces the driver.

The cameras do not automatically upload footage, Haraldsdottir stressed in the video; an upload happens only after the AI detects a problem. The AI recognizes 16 behaviors that trigger an upload, from distracted driving to speeding to “driver drowsiness.”

Amazon Driveri instruction video
A still from the instructional video on Amazon’s Netradyne camera system

Haraldsdottir also stressed that the camera system can be used to “exonerate drivers from blame in safety incidents” and that drivers can trigger a manual upload if there is a safety issue they want to report.

In the report about the driver who quit over the new system, the former employee described it as a “sort of coercion.”

Amazon has faced controversy over claims of surveillance in the past. In January of this year, more than 200 workers signed a petition sent to the CEO Jeff Bezos asking for an end to what the employees called “labor surveillance” ahead of unionization efforts.


Huawei reportedly worked with 4 additional companies to build surveillance tools that track people by ethnicity, following recent revelations that it tested a ‘Uighur alarm’

Huawei China
  • Huawei has worked with at least four partner companies to develop surveillance technologies that claim to monitor people by ethnicity, The Washington Post reported Saturday.
  • Last week, The Post reported that Huawei in 2018 had tested a “Uighur alarm” — an AI facial recognition tool that claimed to identify members of the largely Muslim minority group and alert Chinese authorities.
  • Huawei told The Post that the tool was “simply a test,” but according to Saturday’s report, Huawei has developed multiple such tools.
  • The reports add to growing concern over China’s extensive surveillance and oppression of Uyghurs and other minority groups, as well as increasing use of racially discriminatory surveillance tools and practices by US law enforcement.
  • Visit Business Insider’s homepage for more stories.

Huawei in 2018 tested an AI-powered facial-recognition technology that could trigger a “Uighur alarm” for Chinese authorities when it identified a person from the persecuted minority group, The Washington Post reported last week.

At the time, Huawei spokesperson Glenn Schloss told The Post that the tool was “simply a test and it has not seen real-world application.”

But a new investigation published by The Post on Saturday found that Huawei has worked with dozens of security firms to build surveillance tools – and that products it developed in partnership with four of those companies claimed to be able to identify and monitor people based on their ethnicity.

Documents publicly available on Huawei’s website detailed the capabilities of those ethnicity-tracking tools as well as more than 2,000 product collaborations, according to The Post. The publication also reported that after it contacted Huawei, the company took the website offline temporarily before restoring the site with only 38 products listed.

FILE PHOTO: Huawei headquarters building is pictured in Reading, Britain July 14, 2020. REUTERS/Matthew Childs/File Photo

“Huawei opposes discrimination of all types, including the use of technology to carry out ethnic discrimination,” a Huawei spokesperson told Business Insider. “We provide general-purpose ICT [information and communication technology] products based on recognized industry standards.”

“We do not develop or sell systems that identify people by their ethnic group, and we do not condone the use of our technologies to discriminate against or oppress members of any community,” the spokesperson continued. “We take the allegations in the Washington Post’s article very seriously and are investigating the issues raised within.”

Huawei worked with Beijing Xintiandi Information Technology, DeepGlint, Bresee, and Maiyuesoft on products that made a variety of claims about estimating, tracking, and visualizing people’s ethnicities, according to The Post. It also worked with other Chinese tech companies on tools to suppress citizens’ complaints about wrongdoing by local government officials and to analyze “voiceprint” data.

Beijing Xintiandi Information Technology, DeepGlint, Bresee, and Maiyuesoft could not be reached for comment.

Human rights groups, media reports, and other independent researchers have extensively documented China’s mass surveillance and detainment of as many as one million Uyghurs, Kazakhs, Kyrgyz, and other Muslim minority groups in internment camps, where reports allege they are subjected to torture, sexual abuse, and forced labor for little or no pay.

To help it build the surveillance apparatus that enables such widespread detainment, the Chinese government has at times turned to the country’s technology firms.

“This is not one isolated company. This is systematic,” John Honovich, the founder of IPVM, a research group that first discovered the 2018 test, told The Post. He added that “a lot of thought went into making sure this ‘Uighur alarm’ works.”

In October 2019, the US Commerce Department blacklisted 28 Chinese government agencies and tech companies including China’s five “AI champions” – Hikvision, Dahua, SenseTime, Megvii, and iFlytek – on its banned “entity list,” thus preventing US firms from exporting certain technologies to them.

Still, some of those blacklisted companies have managed to continue exporting their technologies to Western countries, and BuzzFeed News reported last year that US tech firms, including Amazon, Apple, and Google, have continued selling those companies’ products to US consumers via online marketplaces.

In the US, law enforcement agencies and even schools have also increased their reliance on facial recognition software and other AI-powered surveillance technologies, despite growing evidence that such tools exhibit racial and gender bias.

But recent pushback from activists, tech ethicists, and employees has pushed some tech companies to temporarily stop selling facial recognition tools to law enforcement, and some US cities have issued moratoriums on their use, highlighting some divides between approaches to policing in the US and China.


Apple and Google have reportedly banned a major data broker from collecting location data from users’ phones amid scrutiny over its national security work

tim cook sundar pichai apple google
  • Apple and Google have banned X-Mode, a major data broker, from collecting location data from users whose mobile devices run iOS and Android, The Wall Street Journal reported Wednesday.
  • The tech giants told developers they must remove X-Mode’s tracking software or risk being cut off from their app stores — and therefore the vast majority of mobile devices globally.
  • The move by Apple and Google follows recent reports by The Wall Street Journal and Vice News about X-Mode’s national security contracts and congressional scrutiny over how government agencies purchase Americans’ location data from private companies.
  • Visit Business Insider’s homepage for more stories.

Apple and Google have banned X-Mode Social, a major data broker, from collecting mobile location data from iOS and Android users following criticism of its national security work, The Wall Street Journal reported Wednesday.

The tech giants are requiring developers to remove X-Mode’s tracking software from their apps or risk being cut off from Apple’s App Store and Google’s Play Store, according to The Journal. Apple has given developers two weeks to comply, the newspaper reported.

In a statement to Business Insider, a Google spokesperson said: “We are sending a 7-day warning to all developers using the X-Mode SDK. Apps that need more time due to the complexity of their implementation can request an extension, which can be up to 30 days (including the initial 7-days). If X-Mode is still present in the app after the timeframe, the app will be removed from Play.”

Apple’s iOS and Google’s Android mobile operating systems power nearly all smartphones worldwide, effectively forcing developers to ditch X-Mode, and the policies mark one of the most direct actions against a specific data broker.

“X-Mode collects similar mobile app data as most location and advertising SDKs in the industry. Apple and Google would be setting the precedent that they can determine private enterprises’ ability to collect and use mobile app data,” an X-Mode spokesperson told Business Insider in a statement.

X-Mode is still trying to get information from Apple and Google on how its tracking software differs from what other location data companies – or even Apple and Google themselves – collect, the spokesperson added.

Apple did not immediately respond to a request for comment on this story.

The moves by Apple and Google follow recent reports about how X-Mode sells users’ location data to US defense contractors – and, by extension, to US military, law enforcement, and intelligence agencies. Those contracts have drawn scrutiny from lawmakers who argue the practice undermines Americans’ privacy rights by letting the government sidestep search warrants.

Both Apple and Google disclosed their new policies banning X-Mode to investigators working on behalf of Sen. Ron Wyden, according to The Wall Street Journal. Wyden has been investigating how private companies collect and sell Americans’ mobile location data to the government, often without their knowledge, and has proposed legislation that would ban the practice.

Vice News reported in November that X-Mode collects location data from users via as many as 400 apps, including Muslim prayer and dating apps, weather apps, and fitness trackers, and then sells that data to contractors that work with the US Air Force, US Army, and US Navy. X-Mode CEO Josh Anton told CNN Business in April the company tracks 25 million devices in the US every month.

The Wall Street Journal also reported last month that the US Air Force is indirectly using location data from X-Mode to monitor internet-of-things devices.

Other private data brokers have faced pushback in recent months for similar sales of Americans’ location data to US government agencies and contractors. Lawmakers are investigating Venntel for selling data to the FBI and Department of Homeland Security, who reportedly used the data to surveil illegal immigrants, as well as the IRS for buying data from Venntel.
