Requisites for AI Accountability: Areas of Significant Commenter Agreement
The comments submitted in response to the RFC comprise a large and diverse corpus of policy ideas for advancing AI accountability. While there were significant disagreements, there was also substantial support among stakeholders from different constituencies for making AI systems more open to scrutiny and more accountable to all.
This section provides a brief overview of sentiments shared by a significant plurality (if not a majority) of commenters on AI accountability policy, along with NTIA reflections. The Developing Accountability Inputs: A Deeper Dive section treats these positions in more depth; most are congruent with the Report's recommendations.
Recognize Potential Harms and Risks
Many commenters expressed serious concerns about the impact of AI. The potential harms and risks of AI systems have been well documented elsewhere.
Calibrate Accountability Inputs to Risk Levels
NTIA concludes that a tiered approach to AI accountability has the benefit of scoping expectations and obligations proportionately to AI system risks and capabilities.
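By way of illustration only (this sketch is not drawn from the Report or any commenter's proposal), a tiered approach can be thought of as a mapping from risk tiers to proportionate obligations, with obligations accumulating as risk rises so that low-risk systems are not burdened with high-risk requirements. The tier names and obligations below are hypothetical.

```python
# Hypothetical sketch: risk tiers mapped to proportionate accountability
# obligations. Tier names and obligation lists are illustrative only.
from enum import Enum


class RiskTier(Enum):
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3


# Higher-risk tiers accumulate all lower-tier obligations plus their own.
OBLIGATIONS = {
    RiskTier.MINIMAL: ["basic internal documentation"],
    RiskTier.LIMITED: ["basic internal documentation", "public disclosure"],
    RiskTier.HIGH: [
        "basic internal documentation",
        "public disclosure",
        "pre-deployment evaluation",
        "independent audit",
    ],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the accountability obligations scoped to a system's risk tier."""
    return OBLIGATIONS[tier]


print(obligations_for(RiskTier.HIGH))
```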
Ensure Accountability Across the AI Lifecycle and Value Chain
Commenters laid out good reasons to vest accountability with AI system developers who make critical upstream decisions about AI models and other components.
Develop Sector-Specific Accountability With Cross-Sectoral Horizontal Capacity
Commenters thought that additional accountability mechanisms should be tailored to the sector in which the system is deployed. AI deployment in sectors such as health, education, employment, finance, and transportation involves particular risks.
Facilitate Internal and Independent Evaluations
Commenters noted that self-administered AI system assessments are important for identifying risks and system limitations, building internal capacity for ensuring trustworthy AI, and feeding into independent evaluations.
Standardize Evaluations As Appropriate
Commenters noted the importance of using standards to develop common evaluation criteria; standardized criteria make evaluations replicable and comparable.
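As a purely illustrative sketch (not an existing standard or a schema proposed in the comments), the value of standardization can be seen in a minimal machine-readable evaluation specification: pinning the criterion, dataset version, metric, and random seed is what lets two independent evaluators run the same test and compare results. All field names here are hypothetical.

```python
# Hypothetical sketch: a minimal, machine-readable evaluation specification.
# Pinning criterion, dataset version, metric, and seed makes independent
# runs replicable and comparable. Field names are illustrative only.
from dataclasses import dataclass, asdict
import json


@dataclass(frozen=True)
class EvaluationSpec:
    criterion: str        # the standardized criterion being measured
    dataset: str          # identifier of the benchmark dataset
    dataset_version: str  # pinned version, so results are reproducible
    metric: str           # e.g., "accuracy" or "false_positive_rate"
    seed: int             # fixed seed for any sampling in the evaluation


spec = EvaluationSpec(
    criterion="disparate error rates across demographic groups",
    dataset="example-benchmark",   # placeholder name
    dataset_version="1.2.0",
    metric="false_positive_rate",
    seed=42,
)

# Serializing the spec lets independent evaluators run the identical test.
print(json.dumps(asdict(spec), indent=2))
```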
Facilitate Appropriate Access to AI Systems for Evaluation
Commenters identified the inability to gain access to AI system components as one of the chief barriers to AI accountability; what is needed are mechanisms that provide appropriate access to eligible evaluators while controlling for access-related risks.
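One way to picture such a mechanism, offered here as a hypothetical sketch rather than anything prescribed in the Report, is eligibility-gated access levels ranging from black-box querying to full artifact access under safeguards. The levels and eligibility checks below are invented for illustration.

```python
# Hypothetical sketch of eligibility-gated ("structured") access for
# evaluators. Access levels and credential checks are illustrative only.
from enum import IntEnum


class AccessLevel(IntEnum):
    QUERY_ONLY = 1      # black-box API queries, open to any evaluator
    LOGS_AND_DOCS = 2   # system documentation and incident logs
    FULL_ARTIFACTS = 3  # model weights and datasets, under safeguards


def grant_access(is_accredited: bool, signed_security_agreement: bool) -> AccessLevel:
    """Map evaluator credentials to an access level, limiting access-related risk."""
    if not is_accredited:
        return AccessLevel.QUERY_ONLY
    if not signed_security_agreement:
        return AccessLevel.LOGS_AND_DOCS
    return AccessLevel.FULL_ARTIFACTS


print(grant_access(is_accredited=True, signed_security_agreement=False).name)
```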
Standardize and Encourage Information Production
NTIA agrees with commenters that appropriate transparency around AI systems is critical. Information should be pushed out to stakeholders in a form and scope appropriate for the audience and risk level. Communications to the public should be in plain language.
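To make the audience-scoping idea concrete, here is a hypothetical sketch, not a schema endorsed by NTIA or proposed in the comments, of a disclosure record that separates plain-language fields for the public from technical fields for expert audiences. All names are placeholders.

```python
# Hypothetical sketch: a disclosure record with fields scoped by audience.
# Field and system names are illustrative placeholders, not a prescribed schema.
from dataclasses import dataclass, field


@dataclass
class Disclosure:
    system_name: str
    intended_use: str             # plain-language summary for the public
    known_limitations: list[str]  # plain-language limitations for the public
    risk_level: str               # e.g., "high" for consequential uses
    technical_details: dict = field(default_factory=dict)  # for expert audiences


def public_summary(d: Disclosure) -> str:
    """Render only the plain-language subset appropriate for a general audience."""
    limits = "; ".join(d.known_limitations) or "none reported"
    return (f"{d.system_name}: intended for {d.intended_use}. "
            f"Known limitations: {limits}.")


card = Disclosure(
    system_name="ExampleScreener",           # placeholder name
    intended_use="initial resume screening",
    known_limitations=["lower accuracy on non-English resumes"],
    risk_level="high",
    technical_details={"architecture": "gradient-boosted trees"},
)
print(public_summary(card))
```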
Fund and Facilitate Growth of the Accountability Ecosystem
Commenters noted that there is not currently an adequate workforce to conduct AI system evaluations, particularly given the demands of sociotechnical inquiries, the varieties of expertise required, and constraints on the supply of relevant professionals.
Increase Federal Government Role
As elaborated in the Recommendations section, we support accelerated and coordinated government action to determine the best federal regulatory and non-regulatory approaches to the documentation, disclosure, access, and evaluation functions of the AI accountability chain.