Earlier this month, HUD issued a final rule revising its 2013 Fair Housing Act (FHA) disparate impact standards to align the rule more closely with the U.S. Supreme Court’s 2015 decision in Texas Department of Housing and Community Affairs v. Inclusive Communities Project, Inc., which held that disparate impact claims are cognizable under the FHA.

CDIA filed a comment on algorithms in conjunction with this rulemaking. As noted in a Ballard Spahr blog:

…[t]he final rule also establishes a uniform standard for determining when a housing policy or practice with a discriminatory effect violates the FHA and clarifies that application of the disparate impact standard is not intended to affect state laws governing insurance. The final rule largely adopts the proposed disparate impact rule HUD issued in 2019, with several clarifications and certain substantive changes.

The blog further notes that the “final rule codifies a new burden-shifting framework for analyzing disparate impact claims to reflect the Inclusive Communities decision, and requires a plaintiff to sufficiently plead facts to support five elements at the pleading stage that ‘a specific, identifiable policy or practice’ has a discriminatory effect on a protected class group under the FHA.”

HUD did not adopt in the final rule the proposed defense for reliance on a “sound algorithmic model.” HUD stated that this aspect of the proposed rule was “unnecessarily broad,” and that because the agency expects further developments in the laws governing emerging technologies such as algorithms, artificial intelligence, and machine learning, it would be “premature at this time to directly address algorithms.” HUD therefore removed that defense option for defendants at the pleading stage. As a practical matter, this means that disparate impact cases based on the use of scoring models will proceed under the general burden-shifting framework set forth above, which ultimately would require a plaintiff to show that a model’s predictive ability could be achieved by a less discriminatory alternative.