Experts say the $10 million fine imposed on Clearview AI by the UK’s data-privacy watchdog sets clear ground rules for balancing software innovation with people’s right to privacy.
“Clearview AI was working well outside the confines of many AI practitioners,” said Jeremy Howard, co-founder of Fast.ai, an online service that provides resources for AI developers and researchers. “Knowing that this kind of use of personal imagery is being punished is encouraging for those of us who want to make useful tools in an ethical way,” he said.
Within the AI market, facial recognition is a special case of a technology that he expects to be “super regulated,” said Eric Schmidt, former CEO of Google parent Alphabet Inc. and chairman of the National Security Commission on Artificial Intelligence.
Mr Schmidt said many of the key benefits of AI-enabled systems, including software tools designed to accelerate disease detection and diagnosis, require huge amounts of personal data. Beyond facial images and biometric data, he said, “we need to agree on what other information should be so restricted,” giving individuals a chance to opt out.
New York-based startup Clearview AI has collected billions of facial images and personal information from Facebook, LinkedIn and other websites, which it uses to train facial recognition software to identify individuals based on facial scans.
The UK’s Information Commissioner’s Office fined Clearview AI more than £7.5 million on Monday, saying an investigation had determined the company collected more than 20 billion images of people without their approval.
Although the company no longer provides facial recognition services to UK-based organizations, the agency said, it has continued to use citizens’ images and personal data. In addition to the fine, the agency ordered Clearview AI to remove the data from its systems.
Other countries that have taken similar regulatory action against Clearview AI include France, Italy and Australia.
Clearview AI CEO Hoan Ton-That said the company only collects public data from the Internet and complies with “all standards of privacy and law.” He said UK regulators are preventing advanced technology from being used by law enforcement agencies to help solve “heinous crimes against children, seniors and other victims of dishonest acts”.
“While privacy is an important value, a balance must be struck with respect to the use of data that is already public that can be used to enhance the accuracy of artificial intelligence, namely facial recognition,” Mr. Ton-That said.
Clearview AI has been criticized for providing law enforcement agencies in both the US and Canada with facial recognition capabilities – in some cases offering free trials – which critics say are being used against ethnic minorities and other groups and may include algorithmic bias.
Broad commercial applications of facial recognition technology include store and workplace security, targeted advertising and product recommendations, online payments, and other apps and services triggered by facial scans.
Earlier this month, Clearview AI agreed to limit sales of its image database as part of a legal settlement with the American Civil Liberties Union in circuit court in Cook County, Illinois. The settlement stems from a 2020 lawsuit brought by the ACLU that claimed Clearview violated the Biometric Information Privacy Act by collecting the biometric identifiers of Illinois residents without their consent. The state law, enacted in 2008, regulates the collection, use and management of biometric data by private entities.
The US currently has no specific law governing the technology, with many proposed bills stalling or failing to pass through legislative committees.
Dahlia Peterson, a research analyst at Georgetown University’s Center for Security and Emerging Technology, said the UK regulator’s move is unlikely to hinder Clearview AI’s use of facial recognition technology or its expanding potential. “The fines that come after the fact can do little to prevent image data exploitation,” Ms Peterson said.
She added that strict privacy protections in the UK and Europe have forced technology companies there to be innovative, such as developing automated face-pixelation capabilities for live video surveillance. Ms Peterson said efforts could also be made to improve the accuracy of AI models by using synthetic biometric data instead of images of real people.
Greater regulatory certainty may prompt companies to invest in research and development that aligns with the public interest rather than harms it, said David Leslie, director of ethics and responsible innovation research at the Alan Turing Institute, the UK’s national research center for data science and artificial intelligence.
Ari Lightman, professor of digital media and marketing at Carnegie Mellon University’s Heinz College of Information Systems and Public Policy, said tightening the screws on companies like Clearview AI will have an immediate impact on how companies use the data they collect, as well as how and where they collect it. “Data gathering will have to check boxes involving ethical, regulatory and legal precedent or may result in punitive measures,” Mr Lightman said.
Stephen Messer, co-founder and vice president of software maker Collective[i], said a heavy-handed approach to facial recognition regulation in Europe and elsewhere risks pushing advanced developers toward “larger, less regulated markets”.
Write to Angus Loten at [email protected]