The Year Ahead in Artificial Intelligence

2025 is set to see an even greater level of activity in the AI governance space. Below are three areas practitioners should focus on.

1. Expanded comprehensive AI laws at the state level

Colorado will not be the only state with a comprehensive AI law by the time 2025 comes to a close. Early proposals for this upcoming legislative session are building on the Colorado model (which itself is on track for a series of updates) but adding their own twists. These include:

  • Massachusetts HD 396 – requires corporations operating in Massachusetts that use AI systems to target specific consumer groups or influence behavior to disclose (i) the purpose for using such tools, (ii) the ways they are designed to influence consumer behavior, and (iii) details on third parties involved in the design, deployment or operation of such tools.
  • New Mexico HB 60 – creates a private right of action against a developer or deployer for declaratory or injunctive relief (as well as attorneys’ fees) for violations.
  • New York A768 – allows companies to rely on a rebuttable presumption that they exercised reasonable care to avoid algorithmic discrimination when they engage third parties approved by the attorney general to complete bias and governance audits.
  • Texas HB 1709 – adds a new category of regulated entity—distributors, which make an AI system available on the market for a commercial purpose—and requires them to use reasonable care to protect consumers from algorithmic discrimination, including withdrawing an AI system from the market if the distributor knows or has reason to know that the system is not in compliance. The bill also prohibits certain AI uses, including social scoring and categorization based on sensitive attributes.
  • Virginia HB 2094 – adds yet another regulated entity—integrators, which knowingly integrate an AI system into a software application and place such software application on the market—and requires them to adopt an acceptable use policy for the purposes of mitigating known risks of algorithmic discrimination and make certain disclosures to deployers, including information about how the integrator modified the AI system. The bill also requires developers of generative AI systems to “watermark” outputs.

We expect more bills like these (and some that introduce their own curveballs). But it is too soon to tell whether Congress will step in and preempt an emerging patchwork of state laws.
  
2. Continued regulatory guidance and enforcement 

As discussed in our 2024 recap, regulators were active in enforcing existing rules against companies engaging in discriminatory conduct or unfair or deceptive practices relating to their use of AI. This will undoubtedly continue: regulators at the state and federal levels have been quick to point out that they do not need AI-specific legislation to protect consumers. Companies embedding AI into their business practices should look to regulators and their enforcement activity for guidance on what constitutes appropriate use of AI. Based on that activity to date, the three key principles to focus on are transparency, accountability and fairness to individuals.

3. Internal accountability measures

Companies that document their specific AI use cases will be better positioned both to comply with existing requirements and to address new obligations in the pipeline. Although the comprehensive laws discussed above focus on high-risk use cases (those that have material impacts on consumers in certain situations), other uses may also become subject to regulation. An up-to-date inventory of use cases makes it easier to assess which obligations apply as the legal landscape evolves.