AI Regulation: 2023 Recap and 2024 Outlook
2023 was an active year for AI regulation, and 2024 looks to be even busier. Below we summarize the key regulatory developments from last year and look ahead to what is likely to come next.
In the U.S., the focus was on regulating specific uses of AI
The biggest news in the U.S. was the issuance of President Biden’s Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The executive order focuses on federal government use of AI and directs agencies, including Commerce, Energy, Treasury, Homeland Security, Labor, Housing and Urban Development, and Health and Human Services, to assess and address the risks of using AI in the areas under their remit. But while the forthcoming rules, guidance, and best practices will target government use of AI, companies providing services to the government, as well as private sector businesses in several industries, may be affected. The executive order also highlights areas that will likely be the subject of future legislation, including ensuring fair competition, protecting workers from the harms of AI use, advancing the research, development, and implementation of privacy-enhancing technologies, and upholding civil rights.
At the state level, the Colorado Division of Insurance (CDI) adopted regulations governing life insurers’ use of big data systems, including external consumer data and information sources (ECDIS), algorithms, and predictive models. The regulations are the first promulgated under the authority of SB 21-169, which was signed into law in 2021 and prohibits insurers from engaging in insurance practices that result in unfair discrimination based on a protected class. The new rules require assessments, remediation of unfair discrimination, and annual compliance reports to the CDI beginning December 1, 2024.
Colorado finalized regulations, and California proposed regulations, relating to certain types of profiling or automated decision-making that present specified risks. Profiling refers to any form of automated processing of personal data to analyze or make predictions about individuals and their specific situation, preferences, or conduct. The Colorado rules focus on profiling with legal or similarly significant effects and require notice, the ability to opt out, and performance of a risk assessment. The draft California rules focus on automated decision-making with legal or similarly significant effects, profiling of workers, job applicants, or students, and profiling of consumers in publicly accessible places. They would require pre-use notice, the right to opt out, and the ability to access information about the automated decision-making technology used.
Lastly, the New York City Department of Consumer and Worker Protection (DCWP) issued rules for conducting bias audits of automated employment decision tools (AEDTs). These rules were promulgated under the authority of NYC Local Law 144 of 2021, which took effect at the beginning of 2023 and requires employers using an AEDT to conduct an annual bias audit, publish a summary of the audit results, and provide information to employees or job candidates about the use of the tool.
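At its core, the bias audit the rules contemplate is a selection-rate comparison: compute the rate at which each demographic category is selected by the tool, then divide each category’s rate by the highest category’s rate to produce an impact ratio. The sketch below illustrates that arithmetic with made-up numbers; the actual rules specify the required categories, intersectional breakdowns, and data requirements in far more detail.

```python
# Sketch of the selection-rate / impact-ratio arithmetic behind an AEDT
# bias audit. All numbers are hypothetical; consult the DCWP rules for the
# required categories and calculations.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a category selected by the tool."""
    return selected / applicants

# Hypothetical outcomes by category: (selected, total applicants)
outcomes = {"category_a": (120, 400), "category_b": (90, 380)}

rates = {k: selection_rate(s, n) for k, (s, n) in outcomes.items()}
highest = max(rates.values())

# Impact ratio: each category's selection rate relative to the highest rate
impact_ratios = {k: r / highest for k, r in rates.items()}

print(rates)          # {'category_a': 0.30, 'category_b': ~0.237}
print(impact_ratios)  # {'category_a': 1.0,  'category_b': ~0.789}
```

A low impact ratio for a category is the kind of disparity the published audit summary is meant to surface.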
At the international level, we saw political agreement on AI regulation in the EU and specific measures for Generative AI in other countries
The most significant international development in 2023 was the political agreement reached by the European Commission, European Parliament, and the Council of the European Union on the AI Act, which is a product safety regulation designed to address the potential risks of using AI. Although the final text is not yet available, we know from earlier drafts and the negotiation points that the Act will take a tiered approach to regulation, imposing the most stringent requirements on high-risk AI systems and limited obligations for AI systems with minimal risk. There will also be a category of prohibited AI uses, which are banned outright because of their unacceptable risk. Once the final text is published, it must be endorsed by the Member States and confirmed by the Parliament and Council. It is therefore likely that most of these requirements will not take effect until 2026.
Elsewhere, China’s Cyberspace Administration issued the Interim Measures for the Management of Generative Artificial Intelligence Services, which require providers of such services to, among other things, obtain consent for processing personal data; enter into service agreements with users; block or remove illegal content; and appropriately label content generated by the service. In Canada, the Minister of Innovation, Science and Industry announced a Voluntary Code of Conduct for the Responsible Development and Management of Advanced Generative AI Systems. The code is based on six principles: accountability; safety; fairness and equity; transparency; human oversight and monitoring; and validity and robustness. To date, 19 companies have signed on.
In 2024, we can expect regulation that focuses on key governance principles
Last year’s regulatory activity indicates that protecting individuals from the harms of AI use will be the main focus. Any use of AI that has material impacts on individuals, or that wholly replaces human decision-making, will be a target of regulation. Whether that happens through use-case- or industry-specific requirements (as in the U.S.) or more broadly through product safety obligations (as in the EU) remains to be seen. But we can say that future regulatory activity will focus on principles that include accountability, transparency and explainability, fairness and safety, accuracy, and human oversight.
For companies, this means taking a close look at their adoption of AI
To be ready for future regulation, companies should document their use of AI in an inventory. An inventory facilitates categorizing AI uses by risk and identifying uses that may have material impacts on individuals. With this information, companies can conduct risk assessments where appropriate, update notices and disclosures, and implement opt-out procedures. It is also the first step in determining which requirements apply as the laws and regulations governing AI evolve.
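One way to make such an inventory operational is to keep it in a structured, machine-readable form. The sketch below shows one hypothetical shape for an inventory entry and a simple gap check; every field and function name here is illustrative, and the right schema will depend on which laws actually apply to your business.

```python
# A minimal, hypothetical sketch of an AI-use inventory entry and a gap
# check. Field names and thresholds are illustrative, not prescribed by
# any regulation.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"  # e.g., legal or similarly significant effects


@dataclass
class AIUseCase:
    name: str
    owner: str                     # accountable team or individual
    description: str
    risk_tier: RiskTier
    impacts_individuals: bool      # material impact on consumers/workers?
    replaces_human_decision: bool  # wholly automated decision-making?
    notice_published: bool = False
    opt_out_available: bool = False
    risk_assessment_done: bool = False


def needs_attention(use_case: AIUseCase) -> list[str]:
    """Flag compliance gaps for uses likely to draw regulatory requirements."""
    gaps = []
    if use_case.risk_tier is RiskTier.HIGH or use_case.impacts_individuals:
        if not use_case.risk_assessment_done:
            gaps.append("conduct a risk assessment")
        if not use_case.notice_published:
            gaps.append("update notices and disclosures")
    if use_case.replaces_human_decision and not use_case.opt_out_available:
        gaps.append("implement an opt-out procedure")
    return gaps


# Example: a hypothetical tool that screens job applicants
screener = AIUseCase(
    name="resume-screener",
    owner="HR Operations",
    description="Ranks job applicants using a predictive model",
    risk_tier=RiskTier.HIGH,
    impacts_individuals=True,
    replaces_human_decision=True,
)
print(needs_attention(screener))
# ['conduct a risk assessment', 'update notices and disclosures',
#  'implement an opt-out procedure']
```

Keeping the inventory structured this way makes it straightforward to re-run the gap check as new rules take effect and requirements change.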