Selfie-scraping Clearview AI hit with another €20M ban order in Europe – TechCrunch
Clearview AI has been hit with another sanction for breaching European privacy rules.
The Athens-based Hellenic data protection authority has fined the controversial facial recognition firm €20 million and banned it from collecting and processing the personal data of people living in Greece. It has also ordered it to delete any data on Greek citizens that it has already collected.
Since late last year, national DPAs in the UK, Italy and France have also issued similar decisions sanctioning Clearview — effectively freezing its ability to sell its services in their markets, since any local customers would be putting themselves at risk of being fined.
The US-based company gained notoriety for scraping selfies off the Internet to build an algorithmic identity-matching commercial service aimed at law enforcement agencies and others, including private sector entities.
Last year, privacy regulators in Canada and Australia also concluded Clearview’s activities fall foul of local laws — in earlier blows to its ability to scale internationally.
Most recently, in May, Clearview agreed to major restrictions on its services domestically, in the US, in exchange for settling a lawsuit from the American Civil Liberties Union (ACLU) which had accused it of breaking state law in Illinois that bans the use of individuals’ biometric data without consent.
The European Union’s data protection framework, the General Data Protection Regulation (GDPR), sets a similarly high bar for legal use of biometric data to identify individuals — a standard that extends across the bloc, as well as to some non-member states (including the UK); some 30 countries in all.
Under the GDPR, such a sensitive purpose for personal data (i.e. facial recognition for an ID-matching service) would — at a minimum — require explicit consent from the data subjects to process their biometric data.
Yet it’s clear that Clearview never obtained consent from the billions of people (and likely millions of Europeans) whose selfies it surreptitiously took from social media platforms and other online sources to train facial recognition AIs, repurposing people’s data for a privacy-hostile purpose. So the growing string of GDPR sanctions stacking up against it in Europe is hardly surprising. And further penalties may follow.
In its 23-page decision, the Hellenic DPA said Clearview had breached the legality and transparency principles of the GDPR, finding violations of articles 5(1)(a), 6 and 9, as well as breaches of obligations under articles 12, 14, 15 and 27.
The Greek DPA’s decision follows a May 2021 complaint filed by local human rights advocacy group, Homo Digitalis, which has trumpeted the win in a press release — saying the €20M penalty sends a “strong and powerful signal against intrusive business models of companies that seek to make money through the illegal processing of personal data”.
The advocacy organization also suggested the fine sends “a clear message to law enforcement authorities working with companies of this kind that such practices are illegal and grossly violate the rights of data subjects”. (In an even clearer message last year, Sweden’s DPA fined the local police authority €250k for unlawful use of Clearview, which it said breached the country’s Criminal Data Act.)
Clearview has been contacted for comment on the Hellenic DPA’s sanction.
At current count, the company has been fined — on paper — close to €50M by regulators in Europe. Albeit, it’s less clear whether it has paid any of the fines yet, given potential appeals and the overarching challenge for international regulators of enforcing local laws against a US-based entity if it decides not to cooperate.
The UK’s DPA told us Clearview is appealing its sanction in that market.
“We have received notification that Clearview AI has appealed. Clearview AI are not required to comply with the Enforcement Notice or pay the Penalty Notice until the appeal is determined. We will not be commenting further on this situation whilst the legal process is ongoing,” the ICO’s spokesperson said.
Clearview’s responses to earlier GDPR penalties have suggested it is not currently doing business in the affected markets. But it remains to be seen whether the enforcements will work to permanently shut it out of the region — or whether it might seek to circumvent sanctions by adapting its product in some way.
In the US, it spun its settlement with the ACLU as a “huge win” for its business — claiming it would not be impacted since it would still be able to sell its algorithm (rather than access to its database) to private companies in the U.S.
The US lawsuit settlement also included an exception for government contractors — suggesting Clearview can continue to work with federal government agencies in the US, such as Homeland Security and the FBI — while applying a five-year ban on it providing its software to any government contractors or state or local government entities in Illinois itself.
It is certainly notable that European DPAs have not — so far — ordered the destruction of Clearview’s algorithm, despite multiple regulators concluding it was trained on unlawfully obtained personal data.
As we’ve reported before, legal experts have suggested there is a grey area over whether the GDPR empowers oversight bodies to order the deletion of AI models trained on improperly obtained data — not just order deletion of the data itself, as appears to have happened so far in the Clearview saga.
But incoming EU AI legislation could be set to empower regulators to go further: The (still draft) Artificial Intelligence Act contains powers for market surveillance authorities to ‘take all appropriate corrective actions’ to bring an AI system into compliance — including withdrawing it from the market (which essentially amounts to commercial destruction) — depending on the nature of the risk it poses.
If the AI Act that’s finally adopted by EU co-legislators retains this provision, it suggests any wiggle room for commercial entities to operate unlawfully trained AI models inside the bloc could be headed for some hard-edged legal clarity soon.
In the meanwhile, if Clearview obeys all these international orders to delete and limit processing of citizens’ data, it will be unable to keep its AI models updated with fresh biometric data on people from countries where it’s banned from processing — implying that the utility of its product will gradually degrade with each fully enforced ban order.