Virginia’s Artificial Intelligence Bill Nears the Finish Line
Déjà vu all over again: after Colorado and Virginia established competing standards for comprehensive privacy laws in 2021, history appears to be repeating itself in the artificial intelligence (AI) space. Virginia’s legislature passed an AI bill (HB 2094) that is—in many respects—the business-friendly analog to the more demanding Colorado AI Act. Both pieces of legislation, however, stop short of the comprehensive regulation we see with the European Union’s AI Act. The states (technically, one state and one commonwealth), instead, have a much narrower focus: preventing (already unlawful) discrimination by a high-risk AI system (HRAIS) and imposing transparency obligations on companies that create or use those systems.
We dive into the two approaches below by summarizing the Colorado law and then highlighting key differences in the Virginia bill. [For more details on Colorado’s law, check out our earlier client alert.] Spoiler: the Virginia bill has a narrower scope, broader exemptions, and fewer obligations—and then tops it off with a right to cure.
Scope and Definitions
The Colorado AI Act applies to a company that does business in Colorado and either (1) creates or substantially modifies an HRAIS (developer) or (2) uses such a system (deployer). [Notably, there is no requirement that the HRAIS is used by, or sold to, a company in Colorado.] The law targets algorithmic discrimination resulting from an HRAIS: an AI that makes, or is a substantial factor in making, a consequential decision affecting a Colorado resident. Let’s break down each of those terms. Algorithmic discrimination is the use of an AI system that results in unlawful differential treatment or impact that disfavors individuals based on certain characteristics (e.g., age, color, race). A consequential decision is a decision with a material legal or similarly significant effect on the cost, terms, denial, or provision of certain services/opportunities (e.g., education, employment, or essential government services). And a substantial factor is a factor that is generated by AI, “assists in” making a consequential decision, and can alter the outcome of that decision.
The law will take effect on February 1, 2026. [Although there are rumblings that Colorado is considering pushing back the start date to February 2027.]
Virginia adopts the same general approach but delays the effective date to July 2026 and generally narrows the scope:
- Excludes employees. Applies only to an HRAIS affecting individuals in their personal capacity (not as employees).
- Limits coverage for AI changes. Adds carveouts that make it less likely a company will qualify as a developer based on changes they make to an AI system.
- Requires connection between the HRAIS and the Commonwealth. Applies only to deployers making decisions in the Commonwealth and developers developing (or substantially modifying) an HRAIS offered, sold, or otherwise made available in Virginia.
- Narrows what is a “high risk” activity. Limits HRAIS to an AI system that is “specifically intended” to make, or be a substantial factor in making, a consequential decision. The bill also exempts certain technologies (e.g., cybersecurity) and AI systems used for autonomous vehicles.
- Modifies what is a consequential decision. Excludes decisions affecting cost, terms, or essential government services, while adding protection for decisions affecting the provision/denial of parole, pardon, probation, marital status, or release from incarceration.
- Narrows what is a substantial factor. Requires that the AI output be used to make the decision without human involvement or meaningful human consideration (in contrast to Colorado, which requires only that the output be a “factor” in the decision).
- Requires pro-consumer construction. Adds instruction to broadly construe consumer rights/protections and narrowly construe exemptions.
Developer Obligations
Under the Colorado AI Act, a developer’s obligations fall into three buckets: reasonable care, transparency, and notifications. First, a developer must use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. Second, the developer must share details about their HRAIS. The transparency duty has two components: (1) making publicly available certain details (e.g., what types of HRAIS they developed and how they manage known or reasonably foreseeable risks) and (2) providing information to the deployer (e.g., summarizing the training data, describing harmful/inappropriate uses of the system, and explaining the system’s purpose and intended outputs). Third, the developer must notify deployers and the attorney general of known or reasonably foreseeable risks of algorithmic discrimination that the developer identifies after distributing the HRAIS.
Virginia’s proposal varies in a few notable ways:
- Limits transparency requirements. Removes requirement to provide public disclosures, share details about known harmful/inappropriate uses, or share information about “reasonably foreseeable uses.”
- Eliminates the deployer/regulator notice obligation. Removes requirement to notify deployers or the attorney general of algorithmic-discrimination risks that are identified after distributing the HRAIS.
Deployer Obligations
The Colorado AI Act saddles deployers with more obligations than developers. Don’t let the similar categories (reasonable care, risk management, transparency, and notifications) fool you: the actual requirements are more robust than what developers must do. First, a deployer must use reasonable care to prevent algorithmic discrimination. Second, a deployer must manage risks by (1) creating a reasonable risk-management policy that is regularly reviewed and updated, (2) conducting impact assessments at least annually (e.g., assessing the purpose, risks, and outputs), and (3) reviewing annually whether their HRAIS is causing algorithmic discrimination. Third, deployers have robust transparency obligations: they must provide details to consumers before using the HRAIS, give more information after the HRAIS results in an adverse decision, and make certain details available on a public-facing website. Fourth, deployers have a notification obligation: upon discovering their HRAIS is causing algorithmic discrimination, the deployer must inform the attorney general. Colorado, however, exempts small businesses meeting certain conditions from the website-disclosure, impact-assessment, and risk-management-policy requirements.
Virginia’s model differs in key respects:
- Streamlines risk management policy. Eliminates the requirement to establish an iterative process and creates a more demanding reasonableness inquiry by limiting consideration to recognized standards/frameworks (e.g., no consideration of the data or the deployer’s size).
- Creates more demanding (but less frequent) impact assessment. Requires analysis of HRAIS validity/reliability and eliminates obligation for annual impact assessments (instead, an assessment must be completed before deployment or a significant update).
- Expands consumer notice. Adds requirement to notify consumer of any consequential decision (even if not adverse) without undue delay.
- Limits consumer rights. Restricts right to correct data to situations already covered by the Commonwealth’s privacy law and limits notice/appeal rights to adverse decisions made using personal data the consumer did not provide.
- Lightens public notice burdens. Expands publication options (not limited to a website) and eliminates duty to identify HRAIS or disclose input details. Also, there is no duty to notify the attorney general after determining an HRAIS caused algorithmic discrimination.
- Eliminates annual review. Removes requirement to annually review whether the HRAIS is causing algorithmic discrimination.
- Eliminates exemption. Removes limited exemption for small businesses.
Rulemaking and Enforcement
The Colorado AI Act provides for rulemaking by the state attorney general, who is also the sole enforcer of the law—there is no private right of action. In support of that power, the attorney general can require that a company provide certain records (e.g., developer’s documentation or deployer’s policies) to assess the company’s compliance with the law—and that power is not predicated on the existence of an active investigation or suspicion of noncompliance. Any violation of the Colorado AI Act is considered an unfair trade practice with penalties of up to $20,000 per violation (or up to $50,000 per violation against an elderly person).
Virginia, unsurprisingly, deviates somewhat from the Colorado approach:
- Eliminates formal rulemaking. Provides no explicit authority to write rules, but the bill contemplates the attorney general specifying approved risk management frameworks.
- Modifies requests for information. Eliminates fishing expeditions by limiting when the attorney general can request information and permits companies to withhold data in certain situations.
- Adds a discretionary right to cure. Requires pre-lawsuit consultation between the attorney general and companies on whether a cure is possible and, if it is, allows the attorney general to provide a 45-day cure period (if warranted based on prescribed factors).
- Lowers maximum penalty. Reduces the maximum penalty to $1,000 plus fees/costs (rising to $10,000 for willful violations).
Defenses and Presumptions
The Colorado AI Act offers companies both an affirmative defense and a presumption of compliance under certain conditions. A company is entitled to a rebuttable presumption that they used reasonable care if they complied with the law and any regulations. [As noted in an earlier client alert, this seems a bit circular: Who needs a presumption if the company already complied with the requirement to use reasonable care?] Companies also can assert an affirmative defense when they (1) complied with an acceptable AI risk management framework and (2) discovered and cured the violation based on soliciting feedback, conducting adversarial testing, or performing an internal review.
Virginia adds its own color to the presumptions and defense options.
- Expands compliance presumption. Provides presumption of compliance for any requirement addressed by a corresponding provision in an AI framework used by the company.
- Modifies affirmative defense. Removes requirement to show compliance with an AI framework but conditions the defense on a company, within 45 days of discovering a violation, (1) curing the violation and (2) notifying the attorney general (and sharing with them any evidence of harm mitigation).
Exemptions
The Colorado AI Act has a laundry list of exemptions. Many of them track what we see in the state’s comprehensive privacy law: comply with laws or legal process, cooperate with law enforcement, defend or exercise legal claims, and so on. But there are a few notable twists. First, companies can freely use an HRAIS to address fraud or similar illegal activity so long as the HRAIS does not involve facial recognition. Second, medical professionals dodge the law’s requirements when they use AI to generate recommendations that are not high risk and that the health care provider must take steps to implement. Third, insurers are not exempt from the law but are deemed compliant with the Colorado AI Act while complying with the state’s insurance rules on AI.
Stop me if you have heard this one before—but Virginia has its own spin on exemptions:
- Broadens fraud/crime exemption. Allows the exempted anti-fraud and anti-crime uses to include facial recognition.
- Expands healthcare exemption. Covers healthcare entities (1) making high-risk recommendations; (2) facilitating telehealth; and (3) using an HRAIS for security, administration, quality measurement, or internal performance improvements.
- Exempts insurers. Adds an exemption for insurers—and HRAIS used for insurance purposes—if the insurer’s regulator can conduct an examination on discrimination.
- Adds new exemptions. Excludes companies using an HRAIS to (1) provide a product or service requested by a consumer, (2) perform a contract involving the consumer, or (3) conduct internal operations reasonably aligned with the consumer’s expectations or reasonably anticipated by the consumer.
Other Requirements
Both Colorado and Virginia used their AI legislation to add obligations that are disconnected from the general thrust of addressing algorithmic discrimination. Colorado included a provision that appears to require deployers to disclose when they use an AI-powered chatbot. Virginia didn’t copy that provision, but it tacked on its own twist: a watermarking obligation for generative AI. A developer of an HRAIS that generates or substantially modifies generated content must make that synthetic content identifiable using industry-standard tools or tools provided by the developer. But the bite of that provision will be limited because it does not apply to text-based content or to generated information that is unlikely to mislead a reasonable person.
Conclusion
We now wait to see what Governor Glenn Youngkin will do. [Business interest groups, such as the U.S. Chamber of Commerce, are already lining up against the bill.] After it is presented to him, he has seven days to act. He can sign the bill into law, veto the bill, or recommend changes. In the latter two cases, the bill goes back to the legislature for reconsideration.