AI Regulatory Recap: Q1 2024 Updates and Takeaways
It has been a busy start to the year in the land of AI regulation. Below is a recap.
United States
The majority of the activity in the U.S. has been at the state level. Utah leads the pack with passage of the Artificial Intelligence Policy Act, Senate Bill 149, which was signed into law on March 13 and takes effect less than two months later on May 1. The law has two main aspects: the imposition of transparency obligations on certain regulated businesses and the creation of a dedicated entity to oversee the development and regulation of AI technology in the state. The notice obligations focus on the use of generative artificial intelligence to interact with consumers and require notice before initiating an interaction for the provision of licensed services like healthcare and accountancy, or upon the consumer’s request when the interaction relates to a regulated activity like consumer sales and credit. The act also creates the Office of Artificial Intelligence Policy in the state’s Department of Commerce to establish and administer an artificial intelligence learning laboratory program and to consult with businesses and other stakeholders in the state about potential regulatory proposals.
Other states, meanwhile, have been taking a more comprehensive approach to AI regulation, focusing not just on transparency but also on issues like algorithmic discrimination and governance. State bills to watch on this front include California’s Assembly Bill 2930, Connecticut’s Senate Bill 2, and Colorado’s Senate Bill 205. All three focus on AI tools that make, or are a controlling factor in making, consequential decisions in areas including education, employment, essential goods or services, financial or lending services, healthcare, and legal services, and they impose obligations on both developers (those who make these tools) and deployers (those who use these tools). California’s bill refers to these systems as automated decision tools, while the Connecticut and Colorado bills focus on high-risk uses. All three would generally require developers to provide a statement of intended uses to deployers and implement measures to mitigate reasonably foreseeable risks of algorithmic discrimination. Deployers would be required to perform impact assessments, provide notice to consumers, and implement a governance program. The legislative sessions close on May 8 in Colorado and Connecticut and on August 31 in California.
In March, the California Privacy Protection Agency issued revised draft regulations on automated decision-making technology. The rules would apply when using such technology for (a) making a significant decision resulting in access to financial or lending services, housing, insurance, education, criminal justice, employment, healthcare, or essential goods or services; (b) conducting extensive profiling in the areas of work, education, publicly accessible places, or behavioral advertising; or (c) training automated decision-making technology for purposes including making significant decisions, establishing individual identity, engaging in physical or biological identification or profiling, or generating a deepfake. In those circumstances, businesses would be required to provide a pre-use notice to consumers, allow them to opt out, and provide a right to access information about the business’s use of the technology. Formal rulemaking has not yet been initiated.
In January, the New York Department of Financial Services issued a proposed circular letter with recommendations for using external consumer data and information sources in insurance underwriting or pricing. Recommendations include understanding data sources and data sets, performing bias testing of systems, implementing governance frameworks, establishing a consumer complaints process, managing vendors, and providing information when certain adverse decisions are made. The comment period for this circular closed in March and, so far, the department has not issued an update.
At the federal level, there have been a few bills, the most notable of which are the Generative AI Copyright Disclosure Act, which would require transparency from companies regarding their use of copyrighted works to train their generative AI models, and the American Privacy Rights Act, which would provide consumers the right to opt out of consequential decisions (read our full analysis of this bill here). But the more significant activity has been at the federal agency level. Specifically:
- The U.S. Federal Trade Commission warned in February that retroactively amending privacy policy terms to allow use of data for AI training may be unfair or deceptive. It has been a longstanding FTC principle that material, retroactive changes to privacy practices require consumer consent. The FTC has now emphasized that this applies to uses of data for training AI systems.
- During a speech in February, the chair of the Securities and Exchange Commission shared guidance on AI disclosures by public companies. The chair advised that disclosures about material AI risks should be tailored to the company and may require defining for investors how AI is being used and whether it is being developed internally or supplied by a third party.
- Lastly, the Office of Management and Budget finalized guidance to federal agencies regarding risk management steps the federal government must take when using AI. The guidance flows from President Biden’s Executive Order on AI (a detailed summary of which is accessible here) and includes recommendations for strengthening AI governance, advancing responsible AI innovation, and managing risks from the use of AI.
International
The EU AI Act crossed the finish line and will enter into force 20 days after official publication, which will likely occur during the second or third quarter of 2024. Six months after that, the prohibitions on unacceptable-risk AI kick in, followed by the main obligations 18 months later. The key requirements center on high-risk AI systems and general-purpose AI systems. High-risk AI systems include those used for job recruiting, monitoring or evaluating performance and behavior in the workplace, determining creditworthiness or establishing a credit score, and pricing life or health insurance, among others. General-purpose AI systems are those capable of performing a wide range of distinct tasks and trained on a large amount of data using self-supervision at scale.
Under the AI Act, most of the requirements fall on providers (equivalent to “developers” in the U.S. bills described above), who must implement a risk management system; ensure appropriate data governance; maintain technical documentation; provide deployers with instructions for use; and allow for human oversight. But there are also significant obligations for deployers, including, for example, to implement technical and organizational measures to ensure use in accordance with the provider’s instructions; assign human oversight; ensure input data is relevant and sufficiently representative; monitor and maintain certain records; and provide notice to individuals where systems are used to make decisions or assist in doing so. In addition, deployers must complete a fundamental rights impact assessment when using AI for credit scoring or pricing life or health insurance.
In the UK, members of Parliament are pushing for a nationwide approach to regulating AI, as opposed to the sector-specific approach previously advocated by the government. By way of background, in 2023 the Department for Science, Innovation and Technology initiated a public consultation on proposed recommendations for regulators to consider when issuing rules within their respective areas (so, different sets of rules for healthcare, finance, etc.). Under the new bill, regulation would instead be centralized at the national level in an AI Authority, which would be responsible for ensuring alignment of approach across relevant regulators with respect to AI. The bill has passed its second reading in the house of introduction (the House of Lords) and is now in the committee stage.
Key Takeaways
Regulatory focus is coalescing around several key concepts: transparency, material impacts on individuals, and mitigation of potential bias. With this in mind, companies should have a thorough understanding of how their teams are adopting AI and, especially, whether it is being used to make decisions that affect individuals. Performing risk assessments of high-risk uses, such as in the human resources context, may help companies respond more easily to new or expanding regulations by identifying the potential for biased outcomes or opportunities for consumer input. And maintaining documentation about how an AI tool works, what parameters are used to produce its output, and what testing was performed to ensure accuracy and reliability will help companies meet enhanced transparency requirements.