Reducing Information Integrity Risks

by Marilyn Zhang

The spread of synthetic text generated by artificial intelligence (AI) poses risks to information integrity. To address these risks, this project recommends that the National Institute of Standards and Technology (NIST) develop community guidance encouraging digital platforms to make clear when text is known to be AI-generated (“provenance”) and, when a text’s origins are unknown, to let users see where else the same text appears (“fuzzy provenance”). This intervention could help users determine the trustworthiness of the text they encounter online.

This project was completed as part of the 2024 Science & Technology Fellowship, an Aspen Policy Academy program that teaches science and technology experts the skills they need to make an impact on policy.

View the Policy Brief
View the Operational Plan
View the User Journeys