AI Accountability Policy Report

March 27, 2024
Earned Trust through AI System Assurance

Executive Summary

Artificial intelligence (AI) systems are rapidly becoming part of the fabric of everyday American life. From customer service to image generation to manufacturing, AI systems are everywhere.

Alongside their transformative potential for good, AI systems also pose risks of harm. These risks include inaccurate or false outputs; unlawful discriminatory algorithmic decision making; destruction of jobs and the dignity of work; and compromised privacy, safety, and security. Given their influence and ubiquity, these systems must be subject to security and operational mechanisms that mitigate risk and warrant stakeholder trust that they will not cause harm.

Commenters emphasized how AI accountability policies and mechanisms can play a key part in getting the best out of this technology. Participants in the AI ecosystem – including policymakers, industry, civil society, workers, researchers, and impacted community members – should be empowered to expose problems and potential risks, and to hold responsible entities to account.

AI system developers and deployers should have mechanisms in place to prioritize the safety and well-being of people and the environment, and to show that their AI systems work as intended and do not cause harm. Implementation of accountability policies can contribute to the development of a robust, innovative, and informed AI marketplace, where purchasers of AI systems know what they are buying, users know what they are consuming, and subjects of AI systems – workers, communities, and the public – know how systems are being implemented. Transparency in the marketplace allows companies to compete on measures of safety and trustworthiness, and helps to ensure that AI is not deployed in harmful ways. Such competition, facilitated by information, encourages not just compliance with a minimum baseline but also continual improvement over time.

To promote innovation and adoption of trustworthy AI, we need to incentivize and support pre- and post-release evaluation of AI systems, and require more information about them as appropriate. Robust evaluation of AI capabilities, risks, and fitness for purpose is still an emerging field. To achieve real accountability and harness all of AI’s benefits, the United States – and the world – needs new and more widely available accountability tools and information, an ecosystem of independent AI system evaluation, and consequences for those who fail to deliver on commitments or manage risks properly.

Access to information, by appropriate means and parties, is important throughout the AI lifecycle, from early development of a model to deployment and successive uses, as recognized in federal government efforts already underway pursuant to President Biden’s Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence of October 30, 2023 (“AI EO”). This information flow should include documentation about AI system models, architecture, data, performance, limitations, appropriate use, and testing. AI system information should be disclosed in a form fit for the relevant audience, including in plain language. There should be appropriate third-party access to AI system components and processes to promote sufficient actionable understanding of machine learning models.
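
Purely as an illustrative sketch (not part of the report's recommendations), the kind of structured documentation described above could be captured in a machine-readable record that a developer publishes alongside a system; the Python class and field names below are hypothetical and do not reflect any particular documentation standard.

    # Hypothetical sketch of machine-readable AI system documentation covering
    # the categories named in the report: model, architecture, data, performance,
    # limitations, appropriate use, and testing. Field names are illustrative only.
    from dataclasses import dataclass, field, asdict
    import json

    @dataclass
    class AISystemDocumentation:
        model_name: str
        architecture: str                 # e.g., "transformer", "gradient-boosted trees"
        training_data_summary: str        # provenance and known gaps in the data
        performance: dict                 # metrics plus the conditions under which they were measured
        limitations: list = field(default_factory=list)
        appropriate_uses: list = field(default_factory=list)
        testing: list = field(default_factory=list)   # evaluations performed, incl. red-teaming or audits
        plain_language_summary: str = ""  # disclosure fit for a non-expert audience

    doc = AISystemDocumentation(
        model_name="example-classifier-v1",
        architecture="gradient-boosted decision trees",
        training_data_summary="Customer service transcripts, 2020-2023; underrepresents non-English speakers.",
        performance={"accuracy": 0.91, "evaluated_on": "held-out 2023 sample"},
        limitations=["Not validated for medical or legal decisions"],
        appropriate_uses=["Routing customer support tickets"],
        testing=["Internal bias evaluation", "Third-party red-team exercise"],
        plain_language_summary="This system sorts support requests by topic; a person reviews uncertain cases.",
    )

    # Emit the record in a form other parties can inspect.
    print(json.dumps(asdict(doc), indent=2))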

Independent evaluation, by appropriate means and parties, is likewise important throughout the AI lifecycle, as recognized in the AI EO. Claims that an AI system is safe, effective, and fit for purpose should not rest on developer self-assessment alone. Independent evaluations – including red-teaming and audits – can test those claims, surface risks and limitations, and help build the ecosystem of independent AI system evaluation that real accountability requires.

Consequences for responsible parties, building on information sharing and independent evaluations, will require the application or development of levers – such as regulation, market pressures, and legal liability – to hold AI entities accountable for imposing unacceptable risks or making unfounded claims.

The AI Accountability Policy Report conceives of accountability as a chain of inputs linked to consequences. It focuses on how information flow (documentation, disclosures, and access) supports independent evaluations (including red-teaming and audits), which in turn feed into consequences (including liability and regulation) to create accountability. It concludes with recommendations for federal government action, some of which elaborate on themes in the AI EO, to encourage and possibly require accountability inputs.

[Graphic: The AI Accountability Chain model]

In April 2023, the National Telecommunications and Information Administration (NTIA) released a Request for Comment (“RFC”) on a range of questions surrounding AI accountability policy. The RFC elicited more than 1,400 distinct comments from a broad range of stakeholders. In addition, we have met with many interested parties and participated in and reviewed publicly available discussions focused on the issues raised by the RFC.

Based on this input, we have derived eight major policy recommendations, grouped into three categories: Guidance, Support, and Regulations. Some of these recommendations incorporate and build on the work of the National Institute of Standards and Technology (NIST) on AI risk management. We also propose building federal government regulatory and oversight capacity to conduct critical evaluations of AI systems and to help grow the AI accountability ecosystem.

While some recommendations are closely linked to others, policymakers should not hesitate to consider them independently. Each would contribute to the AI accountability ecosystem and mitigate the risks posed by accelerating AI system deployment. We believe that providing targeted guidance, support, and regulations will foster an ecosystem in which AI developers and deployers can be properly held accountable, incentivizing the appropriate management of risk and the creation of more trustworthy AI systems.