The Federal Trade Commission (FTC) has banned pharmacy chain Rite Aid from running facial recognition systems in its stores for five years. The regulator found so many flaws in the retailer’s surveillance program that it concluded Rite Aid had failed to implement reasonable procedures and to prevent harm to consumers in its use of facial recognition technology in hundreds of stores.
In May 2023, the FTC issued a warning that the increasing use of consumers’ biometric information and related technologies, including those powered by machine learning, raises significant consumer privacy and data security concerns, and the potential for bias and discrimination.
In a policy statement, the commission said:
“The agency is committed to combatting unfair or deceptive acts and practices related to the collection and use of consumers’ biometric information and the marketing and use of biometric information technologies.”
According to the FTC, Rite Aid deployed artificial intelligence-based facial recognition technology from 2012 to 2020 to identify customers who may have engaged in shoplifting or other problematic behavior.
The FTC found that Rite Aid deployed a massive, error-riddled surveillance program, provided by vendors that could not properly safeguard the personal data the chain hoarded. The company also failed to inform consumers that it was using the technology in its stores.
According to the complaint, Rite Aid contracted with two companies to help create a database of images of individuals considered persons of interest for engaging in, or attempting to engage in, criminal activity at one of its retail locations. The images were derived from CCTV cameras in the stores and from smartphone pictures taken by employees, and were stored in a database along with the individuals’ names and other information, such as any criminal background data.
Although the system relied on low-quality images to identify these so-called persons of interest, the chain instructed staff to ask flagged customers to leave its stores. Acting on false-positive alerts generated by the flawed system, employees followed consumers around the stores, searched them, ordered them to leave, called the police to confront or remove them, and publicly accused them of shoplifting or other wrongdoing.
According to the complaint, Rite Aid’s system falsely flagged numerous customers, including an 11-year-old girl whom employees searched based on a false-positive result. The FTC says that Rite Aid did nothing to prevent its customers from being falsely accused. In addition, the FTC says Rite Aid’s actions disproportionately impacted people of color.
But even if the system had been completely accurate, there were still significant problems with the way it was deployed.
This is not the first time Rite Aid and the FTC have clashed. In 2010, Rite Aid settled FTC charges that it had failed to protect the sensitive financial and medical information of its customers and employees, in violation of federal law.
The FTC found that Rite Aid also violated the 2010 data security order. In addition to the five-year ban, the agency requires the company to implement safeguards for any automated biometric security or surveillance systems it deploys.
While the FTC ruling highlights Rite Aid’s wrongdoing, it also acknowledges that facial recognition technology itself poses many problems. Because of the privacy implications, some tech giants have backed away from the technology or halted development altogether.
People should at least be informed about when and why facial recognition technology is used, so they can decide for themselves whether they want to participate.