Some technology leaders said the non-binding White House blueprint could lead to heavy-handed regulation of artificial intelligence, risking putting American businesses at a disadvantage.
Rob Zuber, chief technology officer of San Francisco-based software company CircleCI, said legislative efforts play a role in controlling AI, but could also stifle innovation. “It is up to tech leaders to create an environment in which their teams are equally accountable for keeping their AI efforts under control,” Mr. Zuber said.
“I wouldn’t regulate things until we have to,” said Eric Schmidt, former CEO of Alphabet Inc.’s Google. “There are a lot of things that early regulation could prevent from being discovered,” Mr. Schmidt said.
Others said they welcomed the guidelines as providing a framework for regulatory clarity in an obscure area of the technology market.
“The AI systems that enter our lives are often built in ways that conflict directly with these principles,” said Mark Surman, executive director of the Mozilla Foundation, the nonprofit behind the Firefox browser. “They are designed to collect personal data, to be intentionally opaque, and to learn from existing, often biased data sets,” said Mr. Surman, an advocate of online privacy.
He urged federal lawmakers to expand the framework into “something formal and enforceable.”
The guidelines, released Tuesday, “identify five principles that should guide the design, use and deployment of automated systems to protect the American public in the age of artificial intelligence.”
The principles are:
• Protecting people from unsafe or ineffective automated systems
• Preventing discrimination by algorithms
• Protecting people from abusive data practices and letting them know how their data is used
• Notifying people that an automated system is being used
• Allowing users to opt out of automated systems
“Algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequalities or to embed new harmful bias and discrimination,” the White House said. Unchecked collection of online data, including from social media, the guidelines said, “has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity, often without their knowledge or consent.”
The US guidelines are not as strict as the European Union’s General Data Protection Regulation, enacted four years ago, which authorizes hefty fines for companies that violate its rules on how personal data is collected and used.
GDPR has obliged technology companies operating in the EU, including Microsoft Corp. and Amazon.com Inc., to boost their compliance efforts and, in some cases, change their approach to collecting, using and sharing user data.
Last year, the EU’s executive branch proposed legislation setting rules for the use of AI in designated “high-risk” areas, including critical infrastructure, college admissions and loan applications, with fines of up to 6% of a company’s annual worldwide revenue for the most serious violations.
“Globally, America is playing catch-up,” said Peter van der Putten, director of the AI Lab at Pegasystems Inc., a Massachusetts-based software company. “But in a rapidly growing global market, many US companies will have to adhere to global policies anyway.”
Oren Etzioni, co-founder of the Allen Institute for AI, a Seattle-based research organization, and chairman of the AI advisory board at UiPath Inc., a New York-based software-automation vendor, said he expects the White House guidelines to influence government policies and advance regulatory and legislative measures.
“If implemented properly, the bill of rights can reduce the abuse of AI and still support beneficial uses of AI in medicine, driving, enterprise productivity, and more,” Mr. Etzioni said.
Write to Angus Loten at [email protected]