Year in Review: Privacy Compliance and Artificial Intelligence Developments

With 2024 coming to an end, it is time to dust off the quill and ink to put together a recap of privacy and artificial intelligence (AI) developments. So, in the spirit of efficiency:
 

Prompt to [Insert Preferred Generative AI Here]: What happened in 2024 in the AI and privacy worlds that in-house counsel needs to know?
 
Response: 2024 saw rapid, significant changes in the AI and privacy fields. Although the law struggled to keep up, legislators passed significant legislation that will impact how companies can use AI and what they need to do with personal information. In-house counsel should also be aware that regulators started to become more active in policing AI and privacy issues. 

OK, we didn’t actually have a chatbot write that—or this client alert—but 2024 was a year of significant AI and privacy changes. We highlight the notable developments below, and we will be coming to your inbox soon with our complementary piece addressing the issues to watch in 2025. 

Artificial Intelligence

For our 2024 AI wrap-up, we’re going to focus on the U.S. and EU but note that other jurisdictions like Brazil, China, and the UK also saw significant activity towards regulating AI.
 
United States
 
At the federal level, we saw a lot of hurry up and wait: More than 120 bills were introduced in Congress, but few gained traction, and those that did move typically stalled in committee. 

At the state level, 2024 saw 45 states introduce a total of nearly 700 bills (with 31 states enacting legislation) on various AI topics. This is more than a 300% increase from 2023 (when fewer than 200 bills were introduced). The bills introduced typically fell into one of four categories: private use, government use, deepfake generation, and election impact. 

We focus here on the first category, private uses of AI. Four states led the way: California, Colorado, Utah, and Illinois.

In California, the legislature passed four notable bills, one of which was vetoed by the governor:

  • Training Data Disclosures. AB 2013 requires developers of generative AI systems to post information on their website about the training data used for models released after 2021. 
  • Watermarks. SB 942 requires developers of generative AI systems to mark AI-generated content, make available a free AI detection tool, and allow users to include an easily understood disclosure identifying content as AI generated. 
  • Personal Information Definition. AB 1008 amends the California Consumer Privacy Act’s (CCPA) definition of personal information to include information in “abstract digital formats” such as “artificial intelligence systems that are capable of outputting personal information.” This is controversial as the change seemingly flies in the face of the prevailing view that AI systems do not actually store personal information. Given the tension between that view and the revised CCPA definition, companies will be in a tough spot when it comes to data subject requests.
  • Security Requirements. SB 1047 would have required developers of the largest AI models to adopt a written safety and security protocol and implement a “kill switch.” But Gov. Gavin Newsom vetoed the bill because, in his view, it failed to account for the context in which a system is deployed and focused too narrowly on models of a certain size, which are not the only ones that could cause catastrophic harm.

In Colorado, the legislature passed the nation’s first “comprehensive” AI law—SB24-205. Although called a comprehensive law, it has a limited focus: addressing (already unlawful) discrimination. It does so by regulating high-risk AI systems, which are systems used to make consequential decisions relating to the provision or denial of certain material services (e.g., employment, lending, healthcare, and housing). With respect to such systems, companies have transparency obligations and accountability requirements (such as using reasonable care and conducting risk assessments) that vary based on whether the entity is creating or using the AI. To learn more about this law, please check out our client alert, article, and webinar.

Unlike California’s scattershot approach and Colorado’s “comprehensive” law, Utah focused exclusively on transparency. The state’s SB 149 requires people engaged in certain regulated occupations and consumer-focused activities to notify consumers when they are interacting with an AI system (e.g., a chatbot or AI phone assistant). 

Lastly, Illinois passed HB 3773, which applies to employers using AI for employment decisions—such as recruiting, hiring, discipline, and terms of employment. When using AI for such purposes, an employer (1) must notify the employee and (2) cannot discriminate based on protected classes or rely on zip codes as a proxy for race.

Aside from the flurry of state bills on AI, regulators at the federal and state levels were busy pursuing companies for unfair or deceptive practices relating to their use—or public statements about their use—of AI. Below are a few examples:

  • The U.S. Federal Trade Commission (FTC) cracked down on deceptive AI claims and schemes as part of its “Operation AI Comply.” The commission took action against a company promoting an AI tool that enabled its customers to create fake reviews (Rytr), a company selling “AI Lawyer” services (DoNotPay), and multiple companies claiming that they could use AI to help consumers make money through online storefronts (Ascend Ecom, Ecommerce Empire Builders, and FBA Machine).
  • The FTC also used its enforcement authority to require Avast and X-Mode Social to destroy the models/algorithms they trained using improperly collected data. The U.S. Securities and Exchange Commission fined two investment advisers for making false and misleading statements about their use of AI.
  • The Texas Attorney General settled with Pieces Technologies over allegations that the company made false and misleading statements about the accuracy and safety of an AI tool that it offered to healthcare providers for summarizing patient records.

Aside from the legislative and enforcement activity at the federal level, it is worth mentioning that federal agencies were busy implementing measures to achieve the mandates set for them in President Joe Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (E.O. 14110). On the one-year anniversary of the order (October 30), the White House published a fact sheet detailing the agencies’ progress to date in implementing accountability measures and issuing guidance, standards, and best practices.

European Union

The most discussed development in the EU is the entry into force on August 1, 2024, of the EU AI Act—the world’s first fully comprehensive AI legislation (more so than Colorado’s law). The EU AI Act groups AI uses into four main categories, each with its own requirements that take effect over the next three years:

  • Prohibited Use. A prohibited use is a specified activity deemed to pose unacceptable risk, such as conducting social scoring, exploiting vulnerabilities based on specified characteristics (e.g., age or disability), and creating facial recognition databases through untargeted scraping. These uses are banned starting February 2025.
  • General Purpose AI Model (GPAI Model). GPAI Models are trained with large amounts of data using self-supervision at scale and display significant generality. Starting August 2025, all GPAI Model providers must prepare technical documentation, adopt a copyright compliance policy, and publish a summary of the content used for training. There are enhanced obligations, such as testing and security requirements, for providers of GPAI Models that can have significant negative effects in areas such as safety, security, and public health. 
  • Interactive, Generative, and Categorization Systems. The Act imposes transparency obligations, such as disclosing the use of AI or marking AI-generated content, for AI systems that interact with a consumer, generate content, recognize emotions, or categorize biometric data. These obligations, which vary based on whether the company is creating or using the system, take effect in August 2026. 
  • High-Risk AI System. These systems include AI used in either (1) certain areas (such as education, employee management, and certain biometric activities) or (2) products, or components of products, regulated by EU product safety laws. Those who create, use, import, or distribute such AI are subject to a variety of requirements, including registering in an EU database, creating a risk management system, and performing incident monitoring. These rules take effect in August 2026 for most high-risk AI systems and in August 2027 for AI that is high risk because it is regulated by product safety laws.

Privacy Compliance 

Regulators target cookie usage 

Cookies continued to be the belle of the ball for federal and state regulators. While the U.S. Department of Health and Human Services’ (HHS) Office for Civil Rights largely struck out in its effort to wrangle cookies into the definition of protected health information, state regulators picked up the baton and set parameters for appropriate cookie usage. The New York Attorney General rattled cages by suggesting that improper cookie management could give rise to a claim under the state’s consumer protection law (which has a private right of action). But that stick was paired with a carrot: a series of practical tips for cookie management that align with principles embedded in other states’ comprehensive privacy laws. Some of the notable suggestions: verify that cookies are properly categorized in a cookie-management platform, avoid an “accept all” button if cookies are deployed when a user first visits the page, and confirm that user preferences are applied regardless of tracking technology (e.g., consider non-cookie trackers).  

State privacy legislation ramps up while federal proposal withers on the vine

Comprehensive privacy laws remain an exclusively state creature. Congress considered a comprehensive federal privacy bill, but it never even got a committee vote. [We covered that bill in an earlier client alert.] But it wasn’t all dour news at the federal level: the Kids Online Safety and Privacy Act—an update to the Children’s Online Privacy Protection Act (which passed in 1998)—has bipartisan momentum. The Senate version garnered 91 votes, and the House is working on an updated version. [For a primer on the Senate’s version, check out the Congressional Research Service’s overview.]

State privacy legislation, on the other hand, continues to barrel ahead like a runaway train. The 2024 legislative season saw seven states adopt new comprehensive privacy laws (bringing the total to 19), so just a brief word on each:

  • Kentucky. Tracks the Virginia model
  • Maryland. Imposes significantly tougher data minimization requirements
  • Nebraska. Copies Texas’ law (including applying the law regardless of processing volume)
  • New Hampshire. Follows Connecticut law
  • New Jersey. Mandates comprehensive rulemaking (joining only California and Colorado)
  • Minnesota. Requires maintaining a data inventory and documenting applicable internal policies
  • Rhode Island. Adds enhanced disclosures for sales but provides no cure period or data minimization requirement 

Vermont passed a bill with a private right of action, but the governor vetoed it. We also saw some tweaks to existing state laws, including California and Colorado adding neural data to the definition of personal data and Colorado adding biometric protections that even cover employees. Colorado also updated its regulations to add the ability to issue guidance and advisory opinions (which can be used to establish a good-faith-reliance defense in limited situations). Oh, and Illinois reined in Biometric Information Privacy Act damages by clarifying that repeated unlawful collection of the same biometric data from the same person is a single violation of the law. [We covered that topic in a client alert.]
 
California kicks off formal rulemaking (again)

California kicked off formal rulemaking on five topics: insurance, risk assessments, cybersecurity audits, automated decisionmaking technology, and updates to existing rules. [We covered the proposals in a December 2023 client alert and November 2024 follow-up article.] The scope and impact of these rules should not be underestimated: the California Privacy Protection Agency itself estimated the cost at nearly $10 billion over 10 years. Public comments are being accepted through January 14, 2025, although it is likely that there will be revisions and another round of comments.
 
Regulators ramp up privacy enforcement 

2024 saw regulators out in force in the privacy space, with a keen focus on sales, minors’ data, and location information. 
 
State Regulators

California notched settlements with DoorDash and Tilting Point Media regarding unauthorized sales of personal information, and the state also issued guidance on the need for data minimization when assessing data subject requests. Connecticut issued cure notices to companies on issues such as defective disclosures and problematic consent processes. But nobody was more active than Texas, and the Lone Star state did not just limit itself to its new comprehensive privacy law. In addition to investigating—and reaching a $1.4 billion settlement over—the alleged unauthorized processing of biometric data, the state also:

  • Launched an investigation into 15 companies (including big names such as Reddit and Discord as well as Character.ai, a popular AI-driven chatbot) for their handling of minors’ data
  • Sued General Motors for selling location data and TikTok for sharing minors’ data 
  • Investigated a data broker, National Public Data, in connection with a data breach 
  • Issued cure notices based on alleged violations of the state’s comprehensive privacy law, including failing to disclose (and obtain consent for) the processing of precise location data

The last bullet should perhaps be the most concerning for companies: the cure notices became public through a public records request—something we haven’t seen in other states. 

Federal Regulators

Federal regulators also picked up the pace. The FTC continued to police privacy practices with a particular emphasis on sensitive data. To that end, the commission: 

  • Reached settlements with InMarket, Mobilewalla, and X-Mode Social regarding their processing of precise location data
  • Scored a victory in court against Kochava on the theory that selling location data can be an unfair practice
  • Staked out the position that browsing history is sensitive data in a settlement with Avast

Not to be outshone, HHS put on a clinic when it comes to end-of-year enforcement activity. Of HHS’ 14 settlements or enforcement actions this year, 10 came after the start of September. The actions largely targeted breaches (and associated Privacy or Security Rule issues), but HHS also found time to deal with a dentist and a mental health provider who took unreasonably long to address patients’ record requests. HHS generally paired a remedial action plan with a fine—with the department ultimately bringing in more than $9 million. But a majority of that figure ($4.75 million) is tied to one settlement: a medical center that failed to detect an insider accessing over 12,000 patient records and selling details to an identity theft ring.  

Conclusion

While 2024 was busy, indications are that 2025 will be even busier, both on the legislative and regulatory fronts. We will cover what to look for in our upcoming “Looking Ahead” client alert.