Artificial Intelligence & Robotics

Could New York City's AI hiring law be a model for other U.S. city and state regulations?

As federal and state lawmakers grapple with generative artificial intelligence’s possibilities and risks, New York City may be the closest thing to a regulatory trailblazer.

In October 2023, the city published an Artificial Intelligence Action Plan outlining initiatives to guard against the risks of AI, including bias. The city council’s technology committee has proposed legislation to form an Office of Algorithmic Data Integrity to test city agencies’ AI systems for bias and discrimination.

But New York City’s action on AI predates even these developments. In 2021, it enacted an AI hiring law, Local Law 144, and enforcement began in July 2023.

Employers have increasingly turned to AI to speed up the process of screening job candidates and making hiring decisions, with 1 in 4 organizations using automation or AI, including in recruitment and hiring, according to a 2022 survey by the Society for Human Resource Management.

The law bars New York City employers from using so-called automated employment decision tools for screening, hiring and promotion unless the tools undergo an annual independent audit to test for gender and racial bias. Employers also must notify job candidates and employees when the tech is being used. The city can fine businesses for noncompliance, starting at $500 for a first violation and rising to as much as $1,500 for each subsequent violation.
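For a sense of the arithmetic such an audit involves, here is a minimal sketch of an impact-ratio calculation for a simple pass/fail screening tool, in which each demographic group’s selection rate is compared against the group selected at the highest rate. The sample data, group labels and the 0.8 benchmark (the EEOC’s informal “four-fifths” rule of thumb, which the city’s rules do not themselves impose) are illustrative assumptions, not requirements of the law.

```python
# Minimal sketch of an impact-ratio calculation for a pass/fail screening
# tool. The records, group labels and 0.8 benchmark are illustrative
# assumptions, not requirements of New York City's law.
from collections import defaultdict

# Hypothetical audit records: (demographic group, was the candidate selected?)
records = [
    ("Group A", True), ("Group A", True), ("Group A", False),
    ("Group B", True), ("Group B", False), ("Group B", False),
]

# Selection rate per group: candidates selected / candidates assessed.
counts = defaultdict(lambda: [0, 0])  # group -> [selected, assessed]
for group, selected in records:
    counts[group][0] += int(selected)
    counts[group][1] += 1
rates = {g: sel / total for g, (sel, total) in counts.items()}

# Impact ratio: each group's rate divided by the highest group's rate.
best = max(rates.values())
for group, rate in sorted(rates.items()):
    ratio = rate / best
    flag = "review" if ratio < 0.8 else "ok"  # 0.8: illustrative benchmark only
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```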

With federal and state governments scrambling to regulate the technology or risk falling behind other nations, could New York City’s AI hiring law serve as a blueprint?

Rebecca S. Engrav, partner and firmwide co-chair of artificial intelligence and machine learning at Perkins Coie in Seattle, says there are “themes” in New York City’s law that other AI regulations will repeat.

“One of the main purposes of the New York law is trying to counteract bias and discrimination risk,” Engrav says. “I would expect we will continue to see that in other laws as well.”

The law could foreshadow requirements that businesses and organizations exercise due diligence when deploying the tech, Engrav says, and she expects transparency and notice requirements to show up in other laws.

Call your lawyer

If New York City’s experience is any indication, state and federal lawmakers and regulators should expect some bumps in the road. Public interest groups say the regulations let employers off too lightly and fail to cover hiring decisions based on age or disability. In November 2021, the digital rights group Center for Democracy and Technology called the bill “deeply flawed” and said New York City was squandering the chance to create “a model for jurisdictions around the country to follow.”

The business community criticized the law too. Independent auditing of AI is still an undeveloped field with little oversight or standards, according to the Software Alliance, a trade group of software companies, including Microsoft, IBM and Zoom.

In January 2023, it argued in comments to the city’s Department of Consumer and Worker Protection that independent audits are “not a feasible or optimal approach” and said “internal personnel” should be allowed to audit software “so long as they were not involved in the development of the AI system.” Some legal experts have said the rules are too vague.

That’s where law firms will come in. Daniel A. Kadish, an employment lawyer in New York City with Morgan, Lewis & Bockius, says employers are going to need the help of attorneys so they don’t violate the law. “Universally, all of the clients we work with want to comply with the law,” Kadish adds. “The headaches are in trying to wade through some of the ambiguous provisions that make the law either confusing or unclear.”

Most employers don’t rely on proprietary software to make hiring decisions, Kadish says. Instead, they use software from third-party vendors that could be reluctant to reveal confidential information about their tech.

Added to that, Kadish says, there is uncertainty about which data should be used in the bias audit. “The rules don’t provide any real criteria. An organization is recommended to use historical data, if possible. But there’s no definition of what qualifies as historical data. Those are all open questions,” he says.

Stephen S. Wu, an attorney with Silicon Valley Law Group in San Jose, California, advises clients on artificial intelligence and compliance. The law is well-intentioned, Wu says, but it’s unclear who qualifies as an independent auditor and what criteria or standards they would use to test the AI.

New Jersey is among the other states attempting to bar AI hiring software unless employers agree to a bias audit. Lawmakers in Maryland have enacted a law requiring consent from job candidates before an employer uses facial recognition analysis software during the interview process. In Illinois, consent is required if an employer uses AI video analysis of interviews. California is proposing limits on AI if it is shown to screen candidates based on race, sex and other protected characteristics.

Playing catch-up

As the tech develops at lightning speed, states and the federal government are trying to keep up. According to the National Conference of State Legislatures, at least 25 states, Washington, D.C., and Puerto Rico introduced AI bills in the 2023 legislative session, and 15 states and Puerto Rico enacted legislation or adopted resolutions.

On Oct. 30, 2023, President Joe Biden issued an executive order on AI adopting safeguards to protect Americans against the potential risks of the technology while encouraging innovation. The sweeping order addresses a wide array of dangers related to privacy, security, bias and disruptions to the labor market.

Although generative AI chatbots are a new development, AI has been around for years and is already a part of our everyday lives, Engrav says. In the midst of the scramble to create new laws and regulations, she says, lawmakers and regulators may want to consider whether existing laws can do the work for them. She notes, for instance, how discrimination laws have been on the books for years. “We certainly are at an inflection point in terms of AI technology,” Engrav says. “But every law has a cost to it; there are burdens of complying even with things that seem like simple laws.”

This story was originally published in the February-March 2024 issue of the ABA Journal under the headline: “Model Rules?: Could New York City’s AI hiring law be a model for other U.S. city and state regulations?”
