LEGAL NEWSLINE

Monday, November 4, 2024

Prepared remarks: Silicon Flatirons Conference on Privacy at the State Level


It is a pleasure to join the State Center, the University of Colorado’s Silicon Flatirons Center, and Princeton University’s Center for Information Technology Policy for an important conversation on the future of technology policy.  Silicon Flatirons’ culture of engaged conversations across disciplines, with academics, practitioners, and policymakers all being heard, and with particular attention to the questions and perspectives of students, is truly special.  Similarly, the State Center’s commitment to creative and forward-thinking training and resources is exemplary in the government sector and helps ensure attorneys stay at the cutting edge of consumer protection.  I especially appreciate that this bipartisan conversation is focused on identifying challenges and solutions to the problems posed by emerging technologies.  Thank you, Brad Bernthal, Emily Myers, and Mihir Kshirsagar, for your leadership and continued stewardship of these values.

In my talk, I will reflect on our work on data privacy and the challenges ahead in that realm.  But mostly, I will discuss our approach to artificial intelligence (AI), including our Department’s role in developing regulations and enforcement guidance as to Colorado’s new Anti-Discrimination in AI Law (ADAI).  In the case of both data privacy and AI, we have a true north—to be a state that protects consumers, welcomes entrepreneurs, and encourages innovation.  In my view, and I believe we have shown this in the data privacy context, these goals can and should go hand-in-hand.

Before addressing either our work on data privacy or AI, I will first discuss some foundational considerations that guide my thinking on the regulation of emerging technologies.  These principles ensure that we follow our true north and neither shut down innovation in this space nor abandon our commitment to responsibly protecting consumers.  Indeed, as I discuss, Colorado state government can be a model in how we use data and AI for good as well as how we protect consumers by adopting appropriate practices and guardrails that ensure that the use of data and AI does not create material risks or harms to consumers.

I. A Regulatory Environment that Promotes Innovation and Protects Consumers

The regulation of emerging technologies is a topic that merits rigorous reflection and careful analysis.  In some cases, regulators view emerging technologies only through the lens of the potential threats they represent.  This is a mistake.  Emerging technologies, from the Internet to nanotechnology to artificial intelligence, can deliver significant benefits to society if implemented properly and responsibly.  If left unchecked, however, they can also create real harms to consumers and society.  Consequently, the core policy challenge is to tailor regulatory responses to address actual harms while allowing new technologies to emerge and deliver significant benefits to consumers.

In the European Union, unlike the U.S., emerging technologies are sometimes viewed through the lens of what is called the “precautionary principle.”  That principle counsels precaution—and even severe limitations—where there is any uncertainty about the impact of a new technology.  Taken to its logical extreme, the precautionary principle might mean, for example, that a new technology should not be tested or implemented at all if there are any fears about potential negative impacts.  By contrast, the harm principle asks whether the harms from a new technology exceed its benefits.

To understand how the harm principle works in practice, consider the case of nanotechnology.  In 2011, the Obama Administration issued a guidance document on how to approach this technology.  Notably, it did not embrace the precautionary principle, but instead called for a balancing of harms against benefits based on “the best available scientific evidence.”[1]  The document concluded that “regulation and oversight should provide sufficient flexibility to accommodate new evidence and learning and to take into account the evolving nature of information related to emerging technologies and their applications.”  It also emphasized the importance of both risk assessment and risk management and called for regulatory approaches that promote innovation while also advancing regulatory objectives, such as protection of health, the environment, and safety.  Finally, the document advocated adaptable regulation, stating that “[w]here possible, regulatory approaches should be performance-based and provide predictability and flexibility in the face of fresh evidence and evolving information.”

When a new technology emerges, a central question is whether a new regulatory regime is necessary and appropriate, or whether existing protections are sufficient.  In the case of the Internet, commentators debated whether new and specifically tailored regulatory responses were appropriate or whether existing law could suffice to meet the moment.  In arguing for reliance on existing laws, Judge Frank Easterbrook famously ridiculed calls for specific legal responses, suggesting that such a move resembled the call of a prior era for “a law of the horse.”[2]

As discussions around regulating the Internet picked up steam in the 1990s, there was a range of proposals for the best strategies for protecting consumers.  Initially, FTC Chairman Robert Pitofsky suggested that the best model for protecting consumers’ data privacy was for Internet companies to disclose their privacy policies and be held accountable when they mislead consumers by not following those policies.[3]  After attempting to implement this approach for several years, Chairman Pitofsky concluded that a more comprehensive data privacy regime was warranted.[4]  In 2012, the Obama Administration built on this suggestion and developed a Privacy Bill of Rights concept, urging Congress to adopt such a model in federal law.[5]  Reflecting its limited ability to enact significant legislation (even when it has bipartisan support), Congress has yet to pass any such regime.  But a number of states have done so,[6] creating the challenge of ensuring that multiple state laws are interoperable with one another.

In the case of artificial intelligence, our nation is starting just the sort of conversation that happened in the 1990s around the Internet in general and data privacy in particular.  As the FTC did with data privacy in the 1990s, the agency has promised action on the AI front.  Its Director of Consumer Protection, for example, has stated that “If companies are making claims about their use of AI, or the capabilities of their AI, those claims have to be truthful and substantiated, or we are prepared to bring action.”[7]  At the Attorney General’s office, we have a similar commitment, and we are well aware that AI can enable new forms of scams, which we are already on the lookout for and working to raise awareness about.  In a different context (the use of an algorithm, not AI), the Consumer Financial Protection Bureau took action when Citigroup engaged in discrimination against Armenian Americans through the use of software that intentionally denied credit based on national origin, in violation of the Equal Credit Opportunity Act.[8]

As we debate the appropriate regulatory oversight model for artificial intelligence, several threshold questions bear attention.  First, we must ask how any law will define AI.  Particularly at this time, this is no easy task, as differentiating AI from the use of algorithms or even software more broadly can prove challenging.  In short, if a regulatory model is less than careful, oversight of AI could operate unexpectedly (and undesirably) as a regulatory system for many software applications.  Second, for any AI-specific law, we need to evaluate whether and why targeted, AI-specific measures—whether governing hiring decisions or consumer protection—are necessary and appropriate, as opposed to leaving such matters to laws of general applicability.  Third, it is important not to adopt regulations pushed by incumbent industry leaders: they can easily adapt to such rules, which have the intended and likely effect of stifling new entry that would benefit consumers and society.  Finally, particularly as to state-level action, it’s crucial to evaluate whether oversight of an emerging technology will push the development of that technology elsewhere or raise the costs of using it because regulatory measures adopted by a single state are out of sync with those adopted elsewhere.

II. The Colorado Privacy Act Experience

In crafting the Colorado Privacy Act in 2021, we engaged in a thorough stakeholder process to ensure that we were able to work through a range of topics that would be addressed in the law.  Similarly, our process for developing regulations under the law was transparent and inclusive.  As we have enforced the Colorado Privacy Act over the past year, we have used a harm-based approach that can remain relevant and effective even in the face of technological change.  Consider, for example, the risk that sensitive data—which is defined very clearly in the law and limited to certain categories[9]—can be misused.  Cognizant of this harm, we have required that sensitive data be used only after consumers affirmatively opt in, as the Colorado Privacy Act provides.  In policing this requirement of the law, we have utilized the cure period—which allows companies to come into compliance after an issue is raised—to provide flexibility through both enforcement and education.
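
To make the opt-in requirement concrete, the sketch below shows one way a company might gate processing of sensitive data on a recorded, affirmative opt-in.  This is a hypothetical illustration only, not compliance guidance: the category names loosely paraphrase the statute, and the helper function and consent store are assumptions invented for the sketch.

```python
# Hypothetical sketch: gate processing of sensitive data on affirmative opt-in.
# Category names loosely paraphrase C.R.S. 6-1-1303(24); they are illustrative,
# not an official taxonomy or a compliance checklist.

SENSITIVE_CATEGORIES = {
    "racial_or_ethnic_origin",
    "religious_beliefs",
    "health_condition_or_diagnosis",
    "sex_life_or_sexual_orientation",
    "citizenship_status",
    "genetic_or_biometric_identifiers",
    "known_child_data",
}

def may_process(category: str, opt_ins: set[str]) -> bool:
    """Permit processing of a sensitive category only with a recorded opt-in."""
    if category not in SENSITIVE_CATEGORIES:
        return True  # non-sensitive data is governed by other provisions
    return category in opt_ins  # affirmative, category-specific consent

# Example: a consumer who opted in only to health-data processing.
opt_ins = {"health_condition_or_diagnosis"}
assert may_process("health_condition_or_diagnosis", opt_ins)
assert not may_process("religious_beliefs", opt_ins)
```

The design point of the sketch is that consent is checked per category and defaults to denial, mirroring the opt-in default the law sets for sensitive data.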

For both the rulemaking and enforcement processes, we have worked to honor a fundamental principle from the Obama Administration’s nanotechnology guidance document—“[w]here possible, regulatory approaches should be performance-based and provide predictability and flexibility in the face of fresh evidence and evolving information.”[10]  In particular, we have avoided imposing prescriptive requirements, particularly ones grounded in current technology, that could undermine innovation in the future.  Stated differently, Colorado’s privacy law and its implementation rely on identifying critical principles and allowing some flexibility to avoid unintended consequences.

By adopting a principle-based approach, we established a framework that is more likely to be compatible and interoperable with other state privacy laws.  To that end, we are committed to working with our colleagues in other states to ensure there is interoperability between the various state privacy laws.  On both the rulemaking and enforcement fronts, we have worked, and will continue to work, alongside other state AG offices to ensure that companies do not face the risk of complying with one state’s law while violating another’s.

III. The Way Forward on Artificial Intelligence Oversight

In the case of AI, the Colorado General Assembly took action this spring to oversee AI, motivated, in part, by concerns about illegal discrimination.  As Representative Brianna Titone explained, in evaluating the use of AI to make automated hiring or firing decisions, a motivation for the law was to create a way to “double check that this process actually didn’t discriminate and have a bias against that person.”[11]  The goal of guarding against discrimination is important.  What remains open to question, however, is how to develop appropriate protections: should they involve proactive measures, such as affirmative disclosure requirements, or be achieved through after-the-fact reviews?  The latter is the more common approach for safeguarding against discrimination under well-established legal principles.  If current safeguards are sufficient in this context (that is, if imposing affirmative disclosure requirements would cost consumers more than it would benefit them), the recommended strategy is not to impose additional requirements on those using AI.

In analyzing the appropriate level of oversight for AI, it is critical that the United States stick to its tradition of a risk-based approach that follows the harm principle.  Notably, there is a range of powerful and compelling applications of AI that do not raise any material risks and thus do not present any cause for concern.  When it comes to providing consumers with guidance on shopping opportunities or how to decorate their home, for example, AI can make life easier and better for consumers without causing any discernible harm, particularly if consumers can choose whether to receive such suggestions in the first place.

When it comes to government, AI promises innovative solutions to important societal challenges, ranging from fighting wildfires and predicting adverse weather events to managing water and reducing traffic congestion and urban emissions.  In the case of wildfires, for example, AI can help shed light on “new possibilities for understanding the conditions that make fire more likely; spotting fire in its earliest stages; identifying precisely where responders need to be deployed; and tracking where fires are moving.”[12]  With regard to managing traffic flows, Boston and other cities are using AI to analyze traffic patterns and optimize traffic light plans to reduce congestion and stop-and-go emissions.[13]

For governments considering the use of AI, it is important to proceed thoughtfully and with appropriate guardrails.  Consider, for example, how Dutch tax authorities used a self-learning AI model to create risk profiles intended to prevent childcare benefit fraud, and then penalized families based on the system’s risk indicators alone.  By relying on a flawed algorithm without any human oversight or risk assessment, the authorities wrongly accused as many as 26,000 parents of committing child benefits fraud, pushed tens of thousands of families into debt, and, stunningly, caused over a thousand children to be wrongly taken into foster care.[14]

In the case of the Department of Law, we are embracing the responsible use of AI.  We will use it, however, only where it does not pose material risks.  And, as with data privacy, we will ensure that those using AI in our office do so with proper oversight and responsible safeguards in place.

When considering the role of affirmative obligations, it is important to focus them on scenarios where the use of AI would create material risks.  If regulatory oversight is broadened to address all possible uses of AI, it could chill a range of healthy and non-risky innovations.  Where the information involved is sensitive—say, medical information—there is reason for concern about risk.  Appropriate guardrails in such cases might simply involve ensuring that there is human involvement—such as requiring that AI-generated diagnosis recommendations for medical patients remain just that: recommendations to doctors, rather than final diagnoses that automatically prescribe medicine to patients without human involvement.
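
To illustrate what such a guardrail could look like in practice, here is a minimal, hypothetical sketch of a human-in-the-loop design; the type and function names are assumptions invented for the example, not drawn from any law or real system.

```python
# Hypothetical sketch of a human-in-the-loop guardrail: the AI system can only
# propose a diagnosis; a human clinician must make the final decision.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class AiSuggestion:
    """Model output is typed as a suggestion, never as a final determination."""
    condition: str
    confidence: float

@dataclass(frozen=True)
class Diagnosis:
    condition: str
    approved_by: str  # a Diagnosis cannot exist without a named clinician

def clinician_review(suggestion: AiSuggestion, clinician: str, accept: bool) -> Optional[Diagnosis]:
    """The only path from an AI suggestion to a diagnosis runs through a human."""
    if accept:
        return Diagnosis(condition=suggestion.condition, approved_by=clinician)
    return None  # the clinician rejected the suggestion; no diagnosis is issued

# Example: the model suggests, the doctor decides.
suggestion = AiSuggestion(condition="Condition X", confidence=0.87)
final = clinician_review(suggestion, clinician="Dr. Rivera", accept=True)
```

The point is structural: because a Diagnosis cannot be constructed without a named human approver, no downstream code can treat the model’s output as final.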

Another critical distinction for regulatory oversight of AI is whether the focus is on developers (those who design AI systems) or deployers (those who merely use them).  Increasingly, AI is being integrated into software programs such that broad requirements for deployers could apply to almost every public and private sector entity in many facets of their work, including in ordinary, non-sensitive applications.  By contrast, for firms developing AI, protective oversight measures are more appropriate to ensure, for example, that they are engaging in a reasonable level of risk assessment as to the intended use cases for their products and services.

The debate over what type of regulatory regime should be developed for AI in the United States is now underway.  Ideally, the federal government will step up and develop oversight of AI in a thoughtful, forward-looking manner.  In commenting to the Department of Commerce, Colorado led a coalition of 23 states to suggest an appropriate regulatory strategy.[15]  In particular, we suggested that “commitments to robust transparency, reliable testing and assessment requirements, and after-the-fact enforcement” provide a very promising approach for overseeing AI.  We also urged that the focus be on high-risk AI systems, calling for a risk-based approach that would differentiate, for example, between decisions like medical treatment and entertainment choices.  We also noted that AI is inherently less risky where it makes suggestions for humans to act on rather than where it engages in automated decision-making.  Moreover, we called on the Commerce Department to foster “trust in AI systems through transparent and verifiable policies and practices driven by appropriate standards including a code of ethics.”  Finally, we noted the potential benefits from impact assessments and third-party audits for high-risk AI.

Governance over AI has an international dimension, too.  If implemented effectively, a thoughtful U.S. oversight model can enable a U.S.-led approach to AI governance to prevail over a rival initiative led by China.  In the case of the Internet, the U.S. and Western nations developed a framework for Internet governance at a time when there was no alternative.  With respect to AI, however, we can expect that China will advance its own model, one very different in its governance and use cases from any developed in the United States.  As New York Times columnist Nicholas Kristof put it, “it is essential that the United States maintain its lead in artificial intelligence,” as “this is not a competition in which it is OK to be the runner-up to China.”[16]  Prevailing in this competition means we must develop both responsible AI technology and an effective governance framework that protects consumers, enables trust, and allows for ongoing innovation.

Unlike the intensive conversations that took place in crafting Colorado’s privacy law, the Colorado ADAI law was enacted very late in the session and lacked the same level of back-and-forth conversations with key stakeholders.  To ensure those critical conversations can occur before the law goes into effect, the sponsors pushed out the effective date to 2026.  When the Governor signed the law, he highlighted a series of concerns that would need to be addressed in subsequent revisions of the law.  In particular, he stated that “stakeholders, including industry leaders, must take the intervening two years before this measure takes effect to fine tune the provisions and ensure that the final product does not hamper development and expansion of new technologies in Colorado that can improve lives across our state.”[17]

After the ADAI law was passed, leaders of Colorado’s technology sector and others expressed serious concerns about the impact of the law, highlighting a range of unintended consequences.  To respond to such concerns and provide some additional context on the nature of the changes that are warranted, Governor Polis, Senator Robert Rodriguez (the law’s principal sponsor), and I wrote a letter to key stakeholders, outlining our approach to this work and the stakeholder process to come.[18]  Similarly, another co-sponsor, Representative Titone, called for all stakeholders to work together and “understand what these issues are and how they can be abused and misused and have these unintended consequences.”[19]  As we learn more from stakeholders about this law, potential changes that need to be made, and the role of regulations that our Department can create, we will be keeping these issues top of mind.

As this work goes forward, it’s crucial to recognize the risks—and even perils—of Colorado trying to go it alone on AI.  In the worst-case scenario, Colorado’s ADAI law will be out of step with other states, chilling innovation and entrepreneurship here.  In the best-case scenario, the federal government will step in to develop an effective regulatory framework or, as we have done with data privacy, states will work closely together to advance interstate cooperation and ensure that individual state laws are interoperable.

With respect to Colorado’s work to develop an effective and thoughtful ADAI law, our goal is simple—to protect consumers, welcome entrepreneurs, and encourage innovation responsibly.  As we have demonstrated in the privacy context, these are goals that can and should go hand-in-hand.  The stakes of getting this right in the AI context are high; as Vice President Harris put it, we must “reject the false choice that suggests we can either protect the public or advance innovation.  We can and we must do both.”[20]

* * *

We recognize that, over the months ahead, there will be a series of conversations related to Colorado’s ADAI law.  As noted in the letter from the Governor, Senator Rodriguez, and me, there will be “a process to revise the new law, and minimize unintended consequences associated with its implementation.”[21] This process will include the work of the Colorado General Assembly’s Joint Technology Committee related to Artificial Intelligence and our rulemaking process.  The goal of this work will be to “allow for continued robust stakeholder feedback, ongoing education of state regulators and policymakers, and consideration of changes that will provide for a balanced regulatory scheme that prevents discrimination while supporting innovation in technology.”

For our Department, engaged stakeholder conversations are our normal modus operandi.  With the Colorado Privacy Act, we began the process with intensive stakeholder conversations.  In the case of our artificial intelligence law, we have a different mandate: we are both looking at how to refine the law and, where appropriate, considering how to move ahead with regulatory and enforcement guidance.  As we proceed with these conversations, we will follow the same principles we adhered to in developing our privacy rules.  Notably, similar to our work on data privacy,[22] we are providing full transparency into our work through an accessible website where you can sign up to receive information about our rulemaking and submit or view comments on our pre-rulemaking considerations.[23]  Indeed, if you are interested in our work on AI, please sign up for our stakeholder list at coag.gov/ai.  In all events, thank you for your engagement, and please stay tuned on both fronts for more information soon about opportunities to engage and updates on this important work.

 

[1] Memorandum from John P. Holdren, Assistant to the President for Science and Technology; Cass R. Sunstein, Administrator, Office of Information and Regulatory Affairs; and Islam A. Siddiqui, Chief Agricultural Negotiator, United States Trade Representative, on Principles for Regulation and Oversight of Emerging Technologies to the Heads of Executive Departments and Agencies (Mar. 11, 2011).

[2] Frank H. Easterbrook, Cyberspace and the Law of the Horse, 1996 U. Chi. Legal F. 207.  By contrast, Lawrence Lessig made the case that some unique dimensions of the Internet did require specific legal responses.  See Lawrence Lessig, Code and Other Laws of Cyberspace (1999).

[3] Federal Trade Commission, Privacy Online: A Report to Congress 42 (1998).

[4] Federal Trade Commission, Privacy Online: Fair Information Practices in the Electronic Marketplace 36 (2000).

[5] The White House, Consumer Data Privacy in a Networked World: A Framework for Protecting Privacy and Promoting Innovation in the Global Digital Economy 35-40 (2012).

[6] Which States Have Consumer Data Privacy Laws?, Bloomberg Law (Mar. 18, 2024),  https://pro.bloomberglaw.com/insights/privacy/state-privacy-legislation-tracker/.

[7] Ashley Gold, 1 Big Thing: FTC’s New War on False AI Claims, Axios: AI+ (Jun. 12, 2024), https://www.axios.com/newsletters/axios-ai-plus-696af6f0-27e0-11ef-be44-69b46cea0bb5.html?chunk=1&utm_term=emshare#story1.

[8] CFPB Orders Citi to Pay $25.9 Million for Intentional, Illegal Discrimination Against Armenian Americans, Consumer Financial Protection Bureau (Nov. 8, 2023), https://www.consumerfinance.gov/about-us/newsroom/cfpb-orders-citi-to-pay-25-9-million-for-intentional-illegal-discrimination-against-armenian-americans/.

[9] In particular, the Colorado Privacy Act defines sensitive data as: “(a) Personal data revealing racial or ethnic origin, religious beliefs, a mental or physical health condition or diagnosis, sex life or sexual orientation, or citizenship or citizenship status; (b) Genetic or biometric data that may be processed for the purpose of uniquely identifying an individual; or (c) Personal data from a known child.”  C.R.S. § 6-1-1303(24).

[10] Memorandum from John P. Holdren et al., supra note 1.

[11] Bente Birkeland, Colorado Has a First-in-the-Nation Law for AI—But What Will It Do?, CPR News (Jun. 17, 2024, 4:00 AM), https://www.cpr.org/2024/06/17/colorado-artificial-intelligence-law-implementation-ramifications.

[12] Carl Smith, An AI Approach to Firefighting, Governing, Summer 2024, at 17, available at https://www.governing.com/magazine/californias-high-tech-approach-to-preventing-wildfires.  As for weather and hurricane forecasting, see William J. Broad, Artificial Intelligence Gives Weather Forecasters a New Edge, N.Y. Times (Jul. 29, 2024), https://www.nytimes.com/interactive/2024/07/29/science/ai-weather-forecast-hurricane.html.

[13] Steph Solis, Boston’s Partnership with Google Is Its Latest AI Experiment, Axios (Aug. 12, 2024), https://www.axios.com/local/boston/2024/08/12/bostons-partnership-with-google-is-its-latest-ai-experiment.

[14] Melissa Heikkilä, Dutch Scandal Serves as a Warning for Europe over Risks of Using Algorithms, Politico (Mar. 29, 2022), https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/.

[15] Memorandum from State Attorneys General on Artificial Intelligence (“AI”) system accountability measures and policies to Ms. Stephanie Weiner, Acting Chief Counsel, National Telecommunications and Information Administration (Jun. 12, 2023), https://coag.gov/app/uploads/2023/06/NTIA-AI-Comment.pdf.

[16] Nicholas Kristof, Will A.I. Save Us or Kill Us First?, N.Y. Times, Jul. 28, 2024, at SR3, available at  https://www.nytimes.com/2024/07/27/opinion/ai-advances-risks.html.

[17] Letter from Jared Polis, Governor of Colorado, to Colorado General Assembly (May 17, 2024), https://drive.google.com/file/d/1i2cA3IG93VViNbzXu9LPgbTrZGqhyRgM/view.

[18] Letter from Jared Polis, Governor of Colorado, Phil Weiser, Attorney General of Colorado & Robert Rodriguez, Majority Leader, Colorado Senate, to Innovators, Consumers, and All Those Interested in the AI Space (Jun. 13, 2024), https://newspack-coloradosun.s3.amazonaws.com/wp-content/uploads/2024/06/FINAL-DRAFT-AI-Statement-6-12-24-JP-PW-and-RR-Sig.pdf.  In particular, the letter focused on five areas for important work:

  • Refining the definition of high-risk artificial intelligence;
  • Focusing primarily on those who develop high-risk systems, rather than those (including small companies) who deploy them;
  • Shifting from a system based on affirmative disclosure obligations to one that involves the more traditional after-the-fact review and enforcement;
  • Specifying that any consumer right to appeal specified in the law involves an appeal to the Attorney General’s office for potential investigation and, where relevant, the Colorado Civil Rights Commission; and
  • Investigating how Colorado can ensure that we continue to welcome innovation and remain a technology hub.
Id.

[19] Birkeland, supra note 11.

[20] Remarks by Vice President Harris on the Future of Artificial Intelligence (Nov. 1, 2023), https://www.whitehouse.gov/briefing-room/speeches-remarks/2023/11/01/remarks-by-vice-president-harris-on-the-future-of-artificial-intelligence-london-united-kingdom/.

[21] Letter from Polis, Weiser & Rodriguez, supra note 18.

[22] See coag.gov/cpa; https://coag.gov/app/uploads/2022/04/Pre-Rulemaking-Considerations-for-the-Colorado-Privacy-Act.pdf.

[23] See coag.gov/ai.
