Colorado Enacts AI Consumer Protections

The Colorado General Assembly passed Senate Bill 205, which is designed to protect consumers by restricting the use of artificial intelligence (AI) systems that are considered high-risk. It is the first law in the nation to require AI developers to “use reasonable care to avoid algorithmic discrimination.”

Colorado Governor Jared Polis (D) signed the bill but said he has reservations about the law. In a letter to the General Assembly, Polis said he is “concerned about the impact this law may have on an industry that is fueling critical technological advancements across our state for consumers and enterprises alike.” Polis encouraged lawmakers to reexamine the law before it goes into effect on February 1, 2026.

Key Definitions

To understand how this bill could impact your organization, it’s important to know some of its key definitions.

Algorithmic Discrimination – Any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals based on their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law.

High-risk Artificial Intelligence System – Any AI system that, when deployed, makes, or is a substantial factor in making a consequential decision.

Consequential Decision – A decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of:

  • Education enrollment or opportunity
  • Employment opportunity
  • Financial or lending services
  • Essential government services
  • Healthcare services
  • Housing
  • Insurance
  • Legal services

AI Developer Requirements

The bill states “A developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system.”

It requires developers to make available to the deployer:

  • A general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk AI system.
  • Documentation describing:
    • How the high-risk artificial intelligence system was evaluated for performance and mitigation of algorithmic discrimination before the high-risk artificial intelligence system was offered, sold, leased, licensed, given, or otherwise made available to the deployer.
    • The data governance measures used to cover the training datasets and the measures used to examine the suitability of data sources, possible biases, and appropriate mitigation.
    • The intended outputs of the high-risk AI system.
    • The measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination that may arise.
    • How the high-risk AI system should be used, not be used, and monitored by an individual when the high-risk AI system is used to make or is a substantial factor in making a consequential decision.

The developer must make a statement available to the public, via its website or in a public use case inventory, summarizing:

  • The types of high-risk artificial intelligence systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer or other developer.
  • How the developer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise.

The statement must be updated no later than 90 days after any substantial modification.

The developer must notify the Colorado Attorney General no later than 90 days after discovering that:

  • The developer’s ongoing testing and analysis shows that the high-risk AI system has been deployed and has caused or is reasonably likely to have caused algorithmic discrimination.
  • The developer has received from a deployer a credible report that the high-risk artificial intelligence system has been deployed and has caused algorithmic discrimination.

AI Deployer Requirements

The bill places the following requirements on deployers of high-risk AI systems:

  • Implement a risk management policy and program to govern the deployer’s deployment of the high-risk AI system. The NIST AI RMF is an example of an acceptable framework.
  • Complete an impact assessment for the high-risk artificial intelligence system at least annually and within 90 days of a substantial modification.
  • Notify the consumer about the deployment of an AI system before a consequential decision is made. The consumer must also be made aware of the purpose of the high-risk artificial intelligence system and the nature of the consequential decision.
  • Give the consumer the ability to opt out of the processing of personal data and to correct any incorrect personal data.
  • Provide the consumer with an opportunity to appeal, via human review if technically feasible, an adverse consequential decision concerning the consumer arising from the deployment of a high-risk AI system.
  • Make a publicly available statement summarizing:
    • The types of high-risk AI systems currently deployed
    • How risks are managed
    • The nature, source, and extent of data collected and used
  • Notify the Attorney General within 90 days of discovering that an AI system has caused algorithmic discrimination.

The Attorney General has exclusive authority to enforce the law.

The bill does not restrict a developer’s or deployer’s ability to engage in the following activities:

  • Complying with federal, state, or municipal laws, ordinances, or regulations.
  • Cooperating with and conducting specified investigations.
  • Taking immediate steps to protect an interest that is essential for the life or physical safety of a consumer.
  • Conducting and engaging in specified research activities.

The bill provides an affirmative defense for a developer or deployer if:

  • They are compliant with a nationally or internationally recognized risk management framework for artificial intelligence systems.
  • They take specified measures to discover violations of the bill.

CompliancePoint has helped organizations across various industries comply with privacy and cybersecurity laws and frameworks. To learn more about how our services can help your business, contact us at

Finding a credible expert with the appropriate background, expertise, and credentials can be difficult. CompliancePoint is here to help.