Mind Your Business

Anti-discrimination laws are faltering in the face of artificial intelligence; here's what to do about it


“On the basis of.” These four words from the Civil Rights Act of 1964 underlie the modern conception of illegal bias. From federal anti-discrimination statutes to state and local laws, this phrase unites nearly all of them.

The problem? As artificial intelligence systems automate more analyses, it is all too easy for decisions to become less transparent and decision-makers less accountable. Some conscientious developers are improving AI systems’ technical transparency. But those advances are not universal, and they do not necessarily explain the processes behind those systems’ outputs to the public or the courts. Plus, as industries get “techified,” two problems emerge: First, decision-making supply chains involve more actors; and second, regulators treat the most technical actors differently from consumer-facing service providers.

Practical “black boxes,” complicated decision-making and fractured regulation together make these four words—the causal underpinning of U.S. anti-discrimination laws—less effective at stopping discrimination in practice.


Existing anti-discrimination laws are therefore at risk of being undermined or, in some circumstances, becoming obsolete. States and cities are fighting back. New York City has passed new regulations addressing AI discrimination in employment contexts. California has proposed AI-specific amendments to state employment law and is investigating AI discrimination in health care.

Connecticut is acting to combat algorithmic discrimination in insurance. Meanwhile, the D.C. Council is considering a bill to address AI discrimination in education, employment, housing, credit, health care and insurance. A similar, employment-focused bill was proposed in New Jersey in December.

But growing state and local efforts cannot ensure that every U.S. citizen is protected from discrimination in all its forms. We need to restore the power and purpose of federal laws too.

Take, for example, the Fair Housing Act. Its anti-discrimination provisions’ applicability to AI is being tested in a case called Connecticut Fair Housing Center v. CoreLogic before a federal district court in Connecticut. The case focuses on whether third-party software providers can be held liable under the FHA. In that case, a man applied to move into his mother’s apartment. The property manager ran his application through an automated tenant screening service, which reported only that the applicant had “disqualifying records.” When the property manager denied the application, the man’s mother sued under the FHA, alleging discrimination on the basis of race and disability.

But CoreLogic, the software company behind the automated tenant screening service, argues that because it is not itself a landlord, the FHA’s anti-discrimination provisions do not apply to its algorithms. Therefore, CoreLogic asserts, it cannot be held responsible under the FHA for its landlord customers’ allegedly biased housing decisions, even if CoreLogic’s own software is found to be discriminatory.

Early decisions by the court signal that CoreLogic’s arguments might not win; the company might be held liable under the FHA even though it only supports housing-related decisions rather than making them. But courts have not settled precisely when and how algorithms and their developers can be held accountable for their role in supporting and influencing others’ decisions. If one company’s algorithm produces a recommendation, and another company’s manager makes a decision based in part on that recommendation, is the algorithm’s original developer liable for that decision’s consequences? Is a company that deploys someone else’s old algorithm in a new context liable? Or does the buck always stop with the human decision-maker, effectively immunizing the algorithm and its makers?

The confusion does not arise because anti-discrimination laws do not apply to software; they do. Rather, sector-specific regulation has prevented government actors from recognizing the applicability of these laws and then enforcing them across increasingly fragmented commercial algorithmic ecosystems.

Additionally, certain features of AI systems make the reasoning behind their recommendations harder to explain. Legal scholars including Anya Prince, a professor at the University of Iowa College of Law, and Daniel Schwarcz, a professor at the University of Minnesota Law School, have illuminated AI’s potential to discriminate by proxy: a system that never sees a protected trait such as race can still disadvantage a group by relying on a correlated variable, such as a ZIP code. Proxy discrimination is an important issue, but the challenge is greater than proxies alone. AI systems identify patterns in vast amounts of data in ways that exceed human capacities. Often, even a model’s creators and owners cannot explain how or why their system made a particular decision. That is in part because AI-enabled systems rely more on correlation than on causation.
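
To make the proxy problem concrete, here is a toy sketch under entirely fabricated assumptions: a hypothetical screening rule that never sees the protected attribute but keys on a neighborhood code correlated with it, producing sharply different approval rates by group. The group labels, neighborhoods, scores and thresholds are invented for illustration only.

```python
# Toy illustration of proxy discrimination. The screening rule below never uses
# the protected group label, only a neighborhood code and a credit score, yet
# approval rates diverge by group because neighborhood correlates with group.
# All data, labels and thresholds are fabricated for illustration.

applicants = [
    # (protected group, neighborhood, credit score) -- hypothetical records
    ("group_a", "north", 640), ("group_a", "north", 700), ("group_a", "north", 610),
    ("group_b", "south", 650), ("group_b", "south", 690), ("group_b", "south", 620),
]

def screen(neighborhood: str, score: int) -> bool:
    """Hypothetical rule that applies a higher bar to one neighborhood; it never sees group."""
    threshold = 680 if neighborhood == "south" else 600
    return score >= threshold

approvals = {}
for group, neighborhood, score in applicants:
    approvals.setdefault(group, []).append(screen(neighborhood, score))

for group, results in approvals.items():
    print(f"{group}: approval rate {sum(results) / len(results):.2f}")
```

Running the sketch shows one group approved at a much lower rate even though the rule itself never references group membership, which is the essence of discrimination by proxy.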

Translate this lack of clear causation to existing legal frameworks, and we have a problem. How can anti-discrimination laws be enforced when AI systems do not produce outputs “on the basis of” any identifiable variable at all?

Organizations and governments can address these problems in several ways.

First, regulators should clarify how existing anti-discrimination laws apply outside of the limited areas of housing, finance and employment (as one of us has written about before). Cases like CoreLogic are instructive: the fact that CoreLogic is a software company and not a landlord itself should not take the company outside the scope of anti-discrimination laws. The Federal Trade Commission and the Equal Employment Opportunity Commission are already waking up to this issue and attempting to broaden their authorities to better govern algorithmic systems.

Second, organizations adopting algorithmic systems can apply current anti-discrimination standards to their AI decision-making systems, even where their operations appear to fall outside a specific law’s historical scope of enforcement. When new laws are enacted or existing laws are formally expanded, early adopters will already have protected themselves. With the newly passed and proposed laws mentioned above, and a wave of legislation still to come, the case for acting proactively grows stronger.

In practice, this means that organizations should test algorithmic systems for bias early and often. That includes detecting correlations between demographic attributes and a system’s outputs, even when those attributes’ role in the software’s decision-making cannot be fully understood. Doing so will require organizations to overcome an instinctive aversion to collecting sensitive data. Some mistakenly believe that ignoring such information will make their systems fairer. In reality, blindness to protected categories has not been shown to increase fairness, and collecting more data, rather than less, is sometimes necessary to mitigate bias. One FTC commissioner has explicitly encouraged private actors to abandon this type of “blindness” approach.
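
For a concrete picture of what testing early and often can look like, below is a minimal sketch of one common check: comparing selection rates across demographic groups and flagging any group whose rate falls below four-fifths of the highest group’s rate, the familiar “four-fifths rule” heuristic from employment law. The records, group labels and 0.8 threshold are illustrative assumptions, not a prescribed methodology.

```python
# Minimal bias check: selection rate by group and the adverse impact ratio.
# The records and the four-fifths (0.8) threshold are illustrative assumptions.

from collections import defaultdict

# Each record pairs a (self-reported or inferred) group label with the
# algorithm's binary outcome: 1 = selected/approved, 0 = rejected.
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
for group, outcome in records:
    counts[group][0] += outcome
    counts[group][1] += 1

rates = {group: selected / total for group, (selected, total) in counts.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} [{flag}]")
```

In a real deployment, a check like this would run on far larger samples, on a recurring schedule, and alongside statistical significance testing; the point here is only that the basic arithmetic of a first-pass bias audit is not exotic.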

Regulators should also recommend and facilitate bias testing. They might follow the Consumer Financial Protection Bureau in releasing approved methods for inferring protected attributes such as race or sex; such methods allow bias testing without requiring direct access to sensitive personal data.
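
The CFPB’s published proxy methodology is based on Bayesian Improved Surname Geocoding, or BISG, which combines surname-based race and ethnicity probabilities with the demographics of an applicant’s census geography. The sketch below illustrates only the general idea; the probability tables, surnames and group labels are invented placeholders, not the bureau’s actual data or code.

```python
# Rough sketch of the BISG idea: combine P(group | surname) with the share of
# each group living in the applicant's census geography, via Bayes' rule.
# All tables below are fabricated placeholders for illustration only.

# P(group | surname), e.g. derived from Census surname tables (illustrative values).
P_GROUP_GIVEN_SURNAME = {
    "garcia": {"group_a": 0.10, "group_b": 0.90},
    "smith":  {"group_a": 0.75, "group_b": 0.25},
}

# P(geography | group): share of each group's national population living in the
# applicant's census block group (again, illustrative values).
P_GEO_GIVEN_GROUP = {
    "block_123": {"group_a": 0.002, "group_b": 0.006},
}

def bisg_posterior(surname: str, block: str) -> dict:
    """Combine the surname-based prior with geography and normalize the result."""
    prior = P_GROUP_GIVEN_SURNAME[surname.lower()]
    geo = P_GEO_GIVEN_GROUP[block]
    unnormalized = {group: prior[group] * geo[group] for group in prior}
    total = sum(unnormalized.values())
    return {group: value / total for group, value in unnormalized.items()}

# Probabilistic group estimates can then feed a bias test like the one above,
# without ever asking applicants to disclose the protected attribute directly.
print(bisg_posterior("Garcia", "block_123"))
```

Because the output is probabilistic, organizations typically use such estimates for aggregate monitoring rather than individual determinations.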

Are these steps enough? Surely not. There’s more to be done in combating algorithmic discrimination in fragmented commercial ecosystems. In a society marked by systemic inequality, the work needed to decrease bias in AI systems is likely to be unending. And where bias does persist, courts will have to grapple with a new prevalence of correlational rather than causal decision-making. That will be a long and unpredictable evolution.

But the complexity of the challenge does not mean that progress cannot be made. Nor does it mean that organizations and governments cannot achieve the original aims of anti-discrimination laws as artificial intelligence finds new uses. For executives and regulators alike, the time to act is now.


Andrew Burt is co-founder and managing partner of BNH.AI, a boutique law firm focused on AI and analytics, and a visiting fellow at Yale Law School’s Information Society Project.

Eleanor Runde is a research associate at BNH.AI and a JD candidate at Yale Law School.


Mind Your Business is a series of columns written by lawyers, legal professionals and others within the legal industry. The purpose of these columns is to offer practical guidance for attorneys on how to run their practices, provide information about the latest trends in legal technology and how it can help lawyers work more efficiently, and share strategies for building a thriving business.




This column reflects the opinions of the authors and not necessarily the views of the ABA Journal or the American Bar Association.
