By Anna Claire Trahan

Colorado Enacts First AI Laws!



On May 17, 2024, Colorado became the first state to enact comprehensive legislation on artificial intelligence (AI). Colorado SB205 (Consumer Protections for Artificial Intelligence) requires developers and deployers of AI to use “reasonable care” to protect against any known or reasonably foreseeable risks of “algorithmic discrimination.” The Act does not take effect until February 1, 2026, and applies to all persons doing business in Colorado, regardless of the number of consumers affected by their “high-risk systems.”

 

The use of AI in critical sectors such as employment, housing, finance, education, and healthcare is growing rapidly. Recognizing this, the Colorado legislature defined a “high-risk AI system” as one that, when deployed, makes, or is a substantial factor in making, a consequential decision. The Act then defines a “consequential decision” as “a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of” employment, educational, financial, health-care, government, or legal services and opportunities.

 

Further, the Act defines “algorithmic discrimination” as occurring when the use of AI results in unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of actual or perceived age, color, disability, ethnicity, or genetic information, among other protected classifications. Colorado now requires deployers not only to use reasonable care but also to implement a risk management policy and program that is reviewed and updated annually. Such a policy can help identify risk factors for discrimination, as well as the updates necessary for legal compliance.

 

One likely goal of this legislation is to compel developers to disclose to users information about how a system was created, its benefits, intended outputs, methods, and the safeguards taken to avoid algorithmic discrimination.


To put this into perspective, consider that college admissions committees are beginning to use AI to review applications. For schools in Colorado, such a tool would be classified as a “high-risk AI system” because it plays a key role in making a consequential decision for the consumer, namely access to educational opportunities. Although an admissions committee may find AI efficient at prioritizing desirable candidates, over-reliance on it creates pitfalls. AI works from quantitative data and struggles to recognize the non-empirical qualities that benefit an academic environment; a human reviewer, by contrast, can recognize the significance of a national robotics award despite an otherwise mediocre academic record. Over time, this blind spot can lead to homogenous university classes, over-populated majors in specific departments, professor scarcity, and a degradation of research endeavors.

 

Another focus of the legislation is preventing high-risk AI systems from inadvertently causing physical harm in the health-care field. In particular, the medical data behind widely used AI tools must be sufficiently diverse to prevent inaccurate results or “blind spots” for certain groups of people. For example, data generated from studies of men with cardiovascular disease should not be the sole input to an AI tool used to detect the warning signs of a heart attack in both men and women. Heart-attack symptoms differ between the sexes: men are more likely to experience obvious symptoms such as chest pain and light-headedness, whereas women’s symptoms often include fatigue, night sweats, and stomach and back pain. Unless the data set is balanced to include both men and women, the AI tool becomes unreliable.
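The kind of data-balance audit described above can be sketched in a few lines of code. The following is a minimal illustration only, not anything prescribed by the Act: the function name, the demographic field, and the 30% threshold are all hypothetical choices made for the example.

```python
from collections import Counter

def flag_underrepresented(records, attribute, min_share=0.3):
    """Return groups whose share of the training data falls below
    min_share (a hypothetical audit threshold, not a legal standard)."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()
            if n / total < min_share}

# Toy cardiovascular training set skewed toward male subjects
data = [{"sex": "M"}] * 90 + [{"sex": "F"}] * 10
print(flag_underrepresented(data, "sex"))  # {'F': 0.1}
```

A check like this would flag the heart-attack example above: women make up only 10% of the toy data set, well below the chosen threshold, signaling that the tool’s predictions for women may be unreliable.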

 

The law gives the Colorado Attorney General exclusive enforcement authority and allows penalties of up to $20,000 per violation. It also requires developers to disclose to the attorney general and to deployers of a system any known or reasonably foreseeable risk of algorithmic discrimination, and to provide notification within 90 days of discovering such a risk.

 

As the 2018 California Consumer Privacy Act showed, states tend to copy one another’s legislation on novel issues. AI developers would therefore be wise to watch legislative proceedings in other states as Colorado’s new AI law takes effect.





