New York’s Landmark AI Bias Law Prompts Uncertainty



Companies that use AI in hiring are trying to determine how to comply with a New York City law that requires them to test their systems for potential bias.


But the requirement has posed compliance challenges. Unlike the familiar financial audit, refined over decades of accounting practice, the AI audit process is new and lacks clearly established guidelines.


“There is a bigger concern, which is that it is not clear what constitutes an AI audit,” said Andrew Burt, managing partner at AI-focused law firm BNH. “If you are an organization that is using these tools of some sort … it can be very confusing.”
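
For a sense of what such an audit might compute, one statistical check that often anchors U.S. employment-bias analysis is the impact ratio behind the EEOC’s “four-fifths rule,” which compares each group’s selection rate with that of the most-selected group. The sketch below is a minimal illustration of that arithmetic only; the group labels, counts, and 0.8 threshold are illustrative assumptions, not requirements drawn from the New York law, whose rules had not yet been published.

```python
# Minimal sketch of a disparate-impact check on a hiring tool's outcomes.
# All data here is made up; the 0.8 cutoff reflects the EEOC four-fifths
# rule, a common heuristic, not anything mandated by the NYC law.

selections = {
    # group: (candidates screened, candidates advanced by the tool)
    "group_a": (400, 120),
    "group_b": (300, 60),
    "group_c": (150, 45),
}

# Selection rate per group, and the highest rate as the comparison baseline.
rates = {g: advanced / screened for g, (screened, advanced) in selections.items()}
best = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / best
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"  # four-fifths heuristic
    print(f"{group}: selection rate {rate:.2%}, impact ratio {impact_ratio:.2f} [{flag}]")
```

Real audits go well beyond this arithmetic, but the impact ratio gives a sense of the kind of quantitative baseline an auditor can compute even before regulators publish formal guidance.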

The city law could affect a large number of employers. According to the New York State Department of Labor, there were fewer than 200,000 businesses in New York City in 2021.


A New York City spokeswoman said the city’s Department of Consumer and Worker Protection is working on rules to enforce the law but could not yet say when they would be published. She did not respond to inquiries about whether the city had a response to complaints about a perceived lack of guidance.

Beyond the immediate effect in New York City, employers are confident that audits will soon be required in more jurisdictions, said Kevin White, co-chair of the labor and employment team at law firm Hunton Andrews Kurth LLP.

AI has steadily entered the human-resources departments of many companies. According to research published earlier this year by the Society for Human Resource Management, nearly one in four organizations uses automation, AI, or both to support human-resources activities. That figure rises to 42% among companies with more than 5,000 employees.

Other studies have estimated even higher levels of use among employers.

AI technology could help businesses hire candidates more quickly amid a “war for talent,” said Emily Dickens, SHRM’s head of government affairs.

Boosters of the technology have argued that, used well, it can prevent unfair biases from creeping into hiring decisions. For example, a person may inadvertently favor a candidate who attended the same college or roots for a certain team, while a computer has no alma mater or favorite sports team.

A human mind is the “ultimate black box” with its hidden motivations, said Lindsey Zuloaga, chief data scientist at HireVue Inc., as opposed to an algorithm whose responses to various inputs can be examined. HireVue, which lists Unilever PLC and Kraft Heinz Co. among its clients, offers software that can automate interviews.

But if companies aren’t careful, AI “can become very biased at scale, which is scary,” said Ms. Zuloaga, adding that she supports the scrutiny that AI systems have begun to receive.

She added that HireVue’s systems are regularly audited for bias and that the company wants to make sure customers feel comfortable with its tools.

For example, an audit of HireVue’s algorithms published in 2020 found that minority candidates were more likely to give short answers to interview questions, saying things like “I don’t know,” which resulted in their responses being flagged for human review. HireVue changed how its software handles short answers to fix the problem.

The U.S. Chamber of Commerce, which lobbies on behalf of businesses, said companies have concerns about “opacity and a lack of standardization” in what is expected of AI audits.

The potential impact on small businesses could be even greater, said Jordan Crenshaw, vice president of the Chamber’s Technology Engagement Center.

Hunton’s Mr. White said many companies have had to scramble to determine the extent to which they use AI systems in the hiring process. Companies have not adopted a uniform approach to which executive function should “own” AI: in some, human resources drives the process, while in others it is handled by the chief privacy officer or the information-technology department, he said.

“They realize very quickly that they have to form a committee across the company to find out where all the AI is sitting,” he said.

Because New York has not offered clear guidelines, Mr. White said he expects companies to take a variety of approaches to the audits. But compliance difficulties are not pushing companies back to pre-AI-era processes, he said.

“It’s too useful to put back on the shelf,” he said.

Some critics have argued that the New York law does not go far enough. The Surveillance Technology Oversight Project, the New York Civil Liberties Union and other organizations noted a lack of standards for bias audits and pushed for harsher penalties in a letter sent before the law was passed. The groups argued that companies selling tools deemed biased should potentially face punishment, among other suggestions.

Regulators are not necessarily looking for perfection in the early days.

“A true good-faith effort is exactly what regulators are looking for,” said Liz Grennan, co-leader of digital trust at McKinsey & Co. “Literally, regulators are going to be learning.”

Ms. Grennan said some companies are not waiting until the law’s January effective date to act.

To some extent, companies are motivated as much by reputational risk as by the fear of a regulator taking action. For large corporations with high-profile brands, concerns about social impact and environmental, social and governance issues may outweigh concerns about being “slapped by a regulator,” said Anthony Habayeb, chief executive of AI-governance software company Monitaur Inc.

“If I am a large enterprise… I want to be able to demonstrate that I know AI can have problems,” Mr. Habayeb said. “And instead of waiting for someone to tell me what to do… I built controls around these applications because I know, like any software, things can and do go wrong.”

Write to Richard Vanderford at [email protected]


