Using Accountability Inputs
While this Report focuses on information flows and evaluation, many commenters expressed interest in clarification of the second part of the AI Accountability Chain—namely, the attribution of responsibility and the determination of consequences. We therefore briefly address how the accountability inputs discussed above could feed into other structures to help hold entities accountable for AI system impacts. Three important structures are liability regimes, regulatory enforcement, and market initiatives. By supporting these structures, AI system information flows and evaluations can help promote proper assessment of legal and regulatory risk, provide public redress, and enable market rewards for trustworthy AI.

Liability Rules and Standards
As a threshold matter, we note that a great deal of work is being done to understand how existing laws and legal standards apply to the development, offering for sale, and deployment of AI technologies.
Regulatory Enforcement
Experts observe that regulatory tools and capacities have not kept pace with AI developments. Commenters discussed how regulation does or should intersect with AI systems, including the need for regulatory clarity and for new regulatory tools or enforcement bodies.
Market Development
A market for trustworthy AI could gain traction if governmental or nongovernmental entities are positioned to grade or otherwise certify AI systems for trustworthy attributes.