Yesterday, the CFPB put out an Innovation Spotlight blog, Providing adverse action notices when using AI/ML models. The posting notes that one area of innovation the CFPB is “monitoring is artificial intelligence (AI), and particularly a subset of AI, machine learning (ML). For example, in 2017, the Bureau issued a Request for Information Regarding Use of Alternative Data and Modeling Techniques in the Credit Process (RFI). We also issued a No-Action Letter to Upstart Network, Inc., a company that uses ML in making credit decisions, and later shared key highlights from information provided by Upstart.” While “AI has the potential to expand credit access by enabling lenders to evaluate the creditworthiness of some of the millions of consumers who are unscorable using traditional underwriting techniques[,] it may create or amplify risks, including risks of unlawful discrimination, lack of transparency, and privacy concerns. Bias in the source data or model construction can also lead to inaccurate predictions. In considering AI or other technologies, the Bureau is committed to helping spur innovation consistent with consumer protections.”
This latest CFPB posting follows other FTC and CFPB AI releases related to consumer reporting. The CFPB released its latest fair lending report, Protecting consumers and encouraging innovation: 2019 Fair Lending Report to Congress, with an accompanying blog post. Of note is a section, Innovations in access to credit, with a subsection on “providing adverse action notices when using artificial intelligence and machine learning models.” In this section, the Bureau wrote that “artificial intelligence (AI), and more specifically, machine learning (ML), a subset of AI…” will be an area the Bureau will monitor for fair lending and credit access. A good summary of the AI section is found in a Ballard Spahr blog. According to the Bureau, there is regulatory flexibility to handle questions of AI and machine learning. This was the message CDIA provided in a comment to the OMB earlier this year in connection with a January 2020 Request for Comments on a Draft Memorandum to the Heads of Executive Departments and Agencies, Guidance for Regulation of Artificial Intelligence Applications.
Additional analysis of this new blog posting is available in a Morrison Foerster blog.
In April 2020, Andrew Smith, the Director of the FTC’s Bureau of Consumer Protection, released a blog post on businesses’ use of AI and algorithms that offers insight into how that bureau will look at AI through its consumer protection mission. Smith said that “the use of AI tools should be transparent, explainable, fair, and empirically sound, while fostering accountability. We believe that our experience, as well as existing laws, can offer important lessons about how companies can manage the consumer protection risks of AI and algorithms.” Smith’s blog advised businesses to: be transparent, explain their decisions to consumers, ensure that their decisions are fair, ensure that their data and models are robust and empirically sound, and hold themselves accountable for compliance, ethics, fairness, and nondiscrimination. The blog noted that the FTC’s thinking on AI has been influenced in part by its 2016 report, Big Data: A Tool for Inclusion or Exclusion?, which advised companies using big data analytics and machine learning to reduce the opportunity for bias. The FTC also held a hearing in November 2018 to explore AI, algorithms, and predictive analytics.
In January 2020, the Office of Management and Budget (OMB) issued a Request for Comments on a Draft Memorandum to the Heads of Executive Departments and Agencies, Guidance for Regulation of Artificial Intelligence Applications. The January 2020 request follows a February 2019 Executive Order on Maintaining American Leadership in Artificial Intelligence. CDIA filed a comment in connection with the OMB request.