
Standardize and Encourage Information Production

March 27, 2024
Earned Trust through AI System Assurance

Commenters stressed the importance of AI actors providing documentation on matters such as:

  • Problem specification;
  • Training data, including collection, provenance, curation, and management;
  • Model development;
  • Testing and verification;
  • Risk identification and mitigation;
  • Model output interpretability;
  • Safeguards in place to mitigate risks; and
  • System performance and limitations.59
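As a hedged illustration only, and not a format prescribed by NTIA, NIST, or any commenter, the documentation topics listed above could be captured in a simple machine-readable record along the following lines (all field names and values are hypothetical):

    # Hypothetical sketch only: the documentation topics above, captured as a
    # simple machine-readable record. Field names and values are illustrative
    # assumptions, not a prescribed or standard format.
    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class ModelDocumentation:
        problem_specification: str
        training_data: Dict[str, str]   # collection, provenance, curation, management
        model_development: str
        testing_and_verification: List[str]
        risks_identified: List[str]
        risk_mitigations: List[str]
        output_interpretability: str
        performance_and_limitations: str

    # Illustrative example record for a hypothetical system.
    doc = ModelDocumentation(
        problem_specification="Route customer support tickets by topic.",
        training_data={
            "collection": "first-party tickets, 2020-2023",
            "provenance": "internal systems only",
            "curation": "deduplicated; personal data removed",
            "management": "access-controlled data store",
        },
        model_development="Fine-tuned text classifier.",
        testing_and_verification=["held-out accuracy", "bias audit across languages"],
        risks_identified=["misrouting of urgent tickets"],
        risk_mitigations=["human review of low-confidence predictions"],
        output_interpretability="Per-class confidence scores shown to agents.",
        performance_and_limitations="Accuracy degrades on non-English text.",
    )

Structured records of this kind are one way the documentation topics could be made consistent across AI actors and consumable by auditors and other stakeholders.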

Commenters also addressed the value of producing information about the inputs to and source of AI-generated content, also known as “provenance.”60

NTIA agrees with commenters that appropriate transparency around AI systems is critical.61 Information should be pushed out to stakeholders in a form and scope appropriate to the audience and the risk level.62 Communications to the public should be in plain language. Transparency-oriented artifacts such as datasheets, model cards, system cards, technical reports, and data nutritional labels are promising, and some should become standard industry practice as accountability inputs.63 Another type of information, provenance, can inform people about aspects of an AI system's training data, when content is AI-generated, and the authenticity of the purported source of content. Source detection and identification are important aspects of information flow and information integrity.
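As a hedged illustration of the provenance concept, the sketch below attaches a minimal provenance record to a piece of AI-generated content and later checks that the content matches the record. The field names and functions are hypothetical assumptions for illustration; production systems would rely on an open standard such as the C2PA specification referenced in the comments, typically with cryptographic signatures rather than a bare hash.

    # Hypothetical sketch only: a minimal provenance record for AI-generated
    # content. Real deployments would use an open standard (e.g., C2PA) with
    # signed manifests; all field names here are illustrative assumptions.
    import hashlib
    import json

    def make_provenance_record(content: bytes, generator: str, training_data_note: str) -> dict:
        # Record what produced the content and a hash of the content itself.
        return {
            "content_sha256": hashlib.sha256(content).hexdigest(),
            "generated_by": generator,                 # which AI system produced it
            "training_data_note": training_data_note,  # high-level data description
            "ai_generated": True,
        }

    def content_matches_record(content: bytes, record: dict) -> bool:
        # Verifies only that the content is the bytes the record describes;
        # authenticity of the record itself would require a signature.
        return hashlib.sha256(content).hexdigest() == record["content_sha256"]

    record = make_provenance_record(
        b"example generated image bytes",
        generator="example-image-model-v1",
        training_data_note="licensed and public-domain images",
    )
    print(json.dumps(record, indent=2))
    print(content_matches_record(b"example generated image bytes", record))  # True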

59 See NIST AI RMF at 15, Sec. 3.4 (recommending this documentation as part of transparency).

60 See, e.g., Coalition for Content Provenance and Authenticity Comment; Witness Comment; International Association of Scientific, Technical and Medical (STM) Publishers Comment at 4.

61 See NIST AI RMF at 15 (“Meaningful transparency provides access to appropriate levels of information based on the stage of the AI lifecycle and tailored to the role or knowledge of AI actors or individuals interacting with or using the AI system. By promoting higher levels of understanding, transparency increases confidence in the AI system. This characteristic’s scope spans from design decisions and training data to model training, the structure of the model, its intended use cases, and how and when deployment, post-deployment, or end user decisions were made and by whom. Transparency is often necessary for actionable redress related to AI system outputs that are incorrect or otherwise lead to negative impacts.”).

62 See, e.g., IBM Comment at 6 (“noting that AI Accountability legislation would need to account for, among other things, different risk profiles and have [d]isclosure requirements for consumer facing AI systems[.]”); CDT Comment at 41-42 (noting that CDT’s “Civil Rights Standards for 21st Century Employment Selection Procedures” provide for different responsibilities for developers and deployers and different disclosures to deployers and to the public); Google DeepMind Comment at 11 (AI accountability disclosures should include a topline indication of how the AI system works, including “general logic and assumptions that underpin an AI application.” It is “good practice to highlight the inputs that are typically the most significant influences on outputs… [and any] inputs that were excluded that might otherwise have been reasonably expected to have been included (e.g., efforts made to exclude gender or race)”).

63 See, e.g., AI Policy and Governance Working Group Comment at 8-9 (citing Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru, Model Cards for Model Reporting, FAT* '19: Proceedings of the Conference on Fairness, Accountability, and Transparency, at 220-229 (Jan. 2019)) (at minimum model cards “should include the ‘reporting’ components of each of the principles in the technical companion of the AIBoR and reflect best practices for the documentation of the machine learning lifecycle”); Campaign for AI Safety Comment at 3 (“AI labs and providers should be required to publicly disclose the training datasets, model characteristics, and results of evaluations.”).