
Using Accountability Inputs

March 27, 2024
Earned Trust through AI System Assurance

While this Report focuses on information flows and evaluation, many commenters expressed interest in clarification of the second part of the AI Accountability Chain—namely, the attribution of responsibility and the determination of consequences. We therefore briefly address how the accountability inputs discussed above could feed into other structures to help hold entities accountable for AI system impacts. Three important structures are liability regimes, regulatory enforcement, and market initiatives. By supporting these structures, AI system information flows and evaluations can help promote proper assessment of legal and regulatory risk, provide public redress, and enable market rewards for trustworthy AI.

[Graphic: the AI Accountability Chain model]


  • Liability Rules and Standards

    As a threshold matter, we note that a great deal of work is underway to understand how existing laws and legal standards apply to the development, sale, and deployment of AI technologies.

  • Regulatory Enforcement

    Experts observe that regulatory tools and capacities have not kept pace with AI developments. Commenters discussed how regulation does or should intersect with AI systems, including the need for clarity and for new regulatory tools or enforcement bodies.

  • Market Development

    A market for trustworthy AI could gain traction if government and/or nongovernmental entities are in a position to grade or otherwise certify AI systems for trustworthy attributes.