Reducing Information Integrity Risks
The spread of synthetic text generated by artificial intelligence (AI) poses risks to information integrity. To address these risks, this policy brief recommends that the National Institute of Standards and Technology (NIST) develop community guidance encouraging digital platforms to clearly label text that is known to be AI-generated (“provenance”) and, when a text’s origins are unknown, to let users see where else the same text appears (“fuzzy provenance”). This intervention could help users assess the trustworthiness of the text they encounter online.
This brief was completed as part of a project for the 2024 Science and Technology Fellowship, an Aspen Tech Policy Hub program that teaches science and technology experts how to influence policy.