Colorado Enacts Artificial Intelligence Law
Colorado became the first state to adopt a comprehensive AI framework when Gov. Jared Polis signed Senate Bill 205. Unlike the EU AI Act, the law does not ban certain uses of AI. Instead, Colorado focused on accountability: the law adds guardrails designed to prevent discrimination arising from certain high-risk uses of AI and imposes transparency obligations on companies that create or use those tools. But it is not all bad news for companies navigating this fluid field: the law does not take effect until February 2026, it is enforced exclusively by the attorney general, and it contains strong safe harbors (both rebuttable presumptions and an affirmative defense).
The law primarily regulates activities concerning high-risk AI systems, but it also imposes a transparency obligation on companies using any AI system to interact with consumers. The law applies to a company that does business in Colorado and either (1) creates or modifies a high-risk AI system (a “developer”) or (2) uses such a system (a “deployer”). Most of the obligations apply even if the AI system is not used in Colorado, so companies cannot avoid the law merely by refusing to sell high-risk AI systems to Colorado companies or by refraining from using such systems in the state.
Key Terms
Beyond “developer” and “deployer,” which are described above, the key terms are:
- “Artificial intelligence system,” which is defined in a way similar to Executive Order 14110, the EU AI Act, and the OECD AI principles—i.e., referring to a machine-based system that infers from the inputs it receives how to generate outputs, including content, decisions, predictions, or recommendations.
- “High-risk artificial intelligence systems,” which are AI systems that, when deployed, either make or are a substantial factor in making a consequential decision.
- “Consequential decision,” which means a decision that has a material legal or similarly significant effect on the provision, denial, cost, or terms of (1) housing, (2) insurance, (3) legal services, (4) health care services, (5) financial or lending services, (6) essential government services, (7) employment or employment opportunities, or (8) education enrollment or education opportunities.
- “Algorithmic discrimination,” which refers to unlawful differential treatment or impact that disfavors an individual or group based on certain protected characteristics (e.g., age, disability, race, or veteran status). But it does not include activities to increase diversity or redress historical discrimination.
Requirements
The requirements in the new law differ depending on where in the AI supply chain an organization sits. Developers (those who develop or substantially modify high-risk AI systems) are subject to one set of requirements, while deployers (those who use such systems) are subject to a different set. Some requirements apply to both.
Reasonable Care
Both deployers and developers must use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination.
Technical Information
Developers must provide, or make available, the following information to deployers using the high-risk AI system or other developers who are intentionally and substantially modifying that system:
- A description of the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk AI system;
- High-level summaries of the training data used as well as the data governance measures used to examine the suitability of the data;
- Information about the known or reasonably foreseeable limitations of the system as well as the risks of algorithmic discrimination and measures taken to mitigate such risks;
- The system’s purpose, intended benefits and uses, and intended outputs;
- How the system was evaluated for performance and mitigation of algorithmic discrimination;
- How the system should be used, not used, and monitored; and
- Any other documentation the deployer would need in order to comply with its own obligations as well as to understand the outputs and monitor the system’s performance.
Risk Management & Impact Assessments
Deployers must implement a risk-management policy and program to govern their use of high-risk AI systems and annually review the deployment of such systems to ensure they are not causing algorithmic discrimination. In addition, deployers must perform impact assessments of high-risk AI systems at least annually or within 90 days of making an intentional and substantial modification to the system. These impact assessments (a simple record structure is sketched after this list) must cover:
- The purpose, intended use cases, and deployment context of, and benefits afforded by, the high-risk AI system;
- An analysis of whether the system poses known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of the discrimination and the steps taken to mitigate such risks;
- The categories of data processed as inputs and the outputs of the system;
- Metrics used to evaluate the system’s performance and its known limitations;
- A description of the transparency measures taken to provide adequate notice to consumers; and
- Information about post-deployment monitoring, oversight, and user safeguards.
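For teams that track these assessments in software, the elements listed above map onto a simple record type. The following is a minimal, illustrative sketch in Python; the field names are our own shorthand rather than statutory terms, and the substance of each assessment remains a legal and documentation exercise, not a coding one.

```python
# Illustrative only: a minimal record for organizing the impact assessment
# elements the law enumerates. Field names are assumptions, not statutory terms.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ImpactAssessment:
    system_name: str
    assessment_date: date
    purpose_and_intended_use: str          # purpose, use cases, deployment context, benefits
    discrimination_risk_analysis: str      # known or reasonably foreseeable risks and mitigations
    input_data_categories: list = field(default_factory=list)   # categories of data processed as inputs
    output_descriptions: list = field(default_factory=list)     # outputs of the system
    performance_metrics: dict = field(default_factory=dict)     # evaluation metrics and known limitations
    transparency_measures: str = ""        # how consumers are given notice
    post_deployment_monitoring: str = ""   # monitoring, oversight, and user safeguards
    substantial_modification: bool = False # if True, re-assess within 90 days of the change
```

A scheduled review job could, for example, flag any record whose assessment_date is more than a year old, supporting the annual-review cadence described above.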
Consumer Notice
Both developers and deployers must make public statements. Developers must post on their websites a statement summarizing the types of high-risk AI systems they have developed and how they manage known or reasonably foreseeable risks of algorithmic discrimination. Deployers must post a notice identifying the types of high-risk AI systems they currently deploy, describing how they manage known or reasonably foreseeable risks of algorithmic discrimination, and detailing the nature, source, and extent of information collected and used. Additionally, both developers and deployers must disclose to consumers when they are interacting with any AI system (not just a high-risk system), unless that would be obvious to a reasonable person.
Deployers have additional disclosure obligations with respect to consumers. First, deployers of any high-risk AI system must provide pre-use notice before a consequential decision is made using that system. This notice must include information about the system’s purpose, the nature of the consequential decision, the deployer’s contact information, and instructions on how to access the public statement described above. In addition, where applicable, the deployer must inform the consumer of their Colorado Privacy Act right to opt out of profiling in furtherance of decisions that produce legal or similarly significant effects. Second, deployers must provide post-use notice when the high-risk AI system results in a consequential decision that is adverse to the consumer. The notice must inform the consumer about (1) the principal reasons for the decision (including the degree to which the high-risk AI system contributed to it and the types and sources of data processed by the system in making it); (2) the opportunity to correct relevant personal information; and (3) the right to appeal the decision. [Notably, the correction right applies even if the deployer is not subject to the Colorado Privacy Act.]
Attorney General Notice
The law also includes disclosure obligations to the attorney general. Developers must notify the attorney general when they discover through testing that a high-risk AI system has caused or is reasonably likely to cause algorithmic discrimination, or when a deployer has provided a credible report that the system has caused algorithmic discrimination. [The developer must also share that information with known deployers or other developers of that high-risk AI system.] A deployer must notify the attorney general when it determines that a high-risk AI system has caused algorithmic discrimination. These reports must be made without unreasonable delay and no later than 90 days after the discovery or determination.
Exceptions
There are robust exceptions; they make up nearly 25% of the bill. Some of the notable carve-outs include (1) legal compliance, (2) legal claims, (3) physical safety, and (4) insurers who comply with certain AI restrictions codified elsewhere. There is also a limited exception for Health Insurance Portability and Accountability Act-covered entities: the law does not apply to their use of AI to generate treatment recommendations so long as a health care provider must take action to implement the recommendation and the use is not “high risk.” Additionally, businesses with fewer than 50 full-time employees are exempt from the requirements concerning impact assessments, risk-management policies, and certain public disclosures when (1) their data is not used to train the high-risk AI system, (2) they use the system for its intended purpose, and (3) the developer provided an impact assessment that the deployer makes available.
Rulemaking and Enforcement
The attorney general can, but is not required to, engage in rulemaking to flesh out the law.
In a notable deviation from the Colorado Privacy Act, the attorney general has exclusive enforcement authority—it is not shared with district attorneys. To facilitate enforcement, the attorney general can compel (1) developers to turn over documentation they shared with deployers or other developers and (2) deployers to share their risk management policy, impact assessments, and records concerning those assessments.
But the law throws a bone to companies by codifying some defense-friendly provisions. There is a rebuttable presumption that a company used reasonable care to protect consumers from risks of algorithmic discrimination if the company complied with the law and any rules promulgated by the attorney general. [But this is somewhat circular: a company gets the presumption only after establishing that it complied with the law.] There is also an affirmative defense to any purported violation for a company that (1) complied with NIST’s AI risk management framework (or a comparable framework) and (2) discovered and cured the violation based on soliciting feedback, conducting adversarial testing, or performing an internal review.
Steps to Take to Prepare
Companies should develop an AI inventory that, at a minimum, describes any uses of AI systems, whether such systems are developed internally or by a third party, and the potential impacts to consumers from those uses. This will aid in determining whether any of those uses would qualify as high-risk under Colorado’s law; a minimal inventory-entry sketch follows below. Companies should also assemble cross-functional stakeholder teams to develop internal governance programs. This will put the right teams at the table to develop the necessary documentation and provide the required notices if they determine that the law applies to their AI activities.
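As a starting point for that inventory, each system can be captured in a lightweight entry with a first-pass screen against the consequential-decision categories listed above. This is a hypothetical illustration, assuming a Python-based inventory tool; a flagged entry is a candidate for counsel review, not a legal conclusion that the system is high-risk.

```python
# Illustrative AI inventory entry with a first-pass screen for potentially
# high-risk uses. The category set mirrors the "consequential decision" areas
# described above; whether a use is actually high-risk is a legal determination.
from dataclasses import dataclass
from typing import Optional

CONSEQUENTIAL_AREAS = {
    "housing", "insurance", "legal services", "health care services",
    "financial or lending services", "essential government services",
    "employment", "education",
}


@dataclass
class AIInventoryEntry:
    system_name: str
    description: str
    developed_internally: bool       # False if procured from a third-party developer
    vendor: Optional[str]            # the third-party developer, if any
    decision_areas: set              # areas where the system's outputs inform decisions
    consumer_facing: bool            # does it interact with consumers directly?

    def potentially_high_risk(self) -> bool:
        """Flag entries whose outputs touch a consequential-decision area."""
        return bool(set(self.decision_areas) & CONSEQUENTIAL_AREAS)


# Example (hypothetical system and vendor names):
screening_tool = AIInventoryEntry(
    system_name="resume-screening-model",
    description="Ranks applicants for open roles",
    developed_internally=False,
    vendor="ExampleVendor",
    decision_areas={"employment"},
    consumer_facing=False,
)
print(screening_tool.potentially_high_risk())  # True: employment is a consequential area
```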