Aspen Policy Academy

Utah Adopts Fellows’ AI Evaluation Recommendations

  • Published May 6, 2025

We are pleased to announce that the Utah Office of Artificial Intelligence Policy (OAIP) is adopting artificial intelligence evaluation recommendations from our 2024 Science and Technology Policy Fellows. Zach Boyd, OAIP’s director, announced this news in a recent feature on StateScoop.

As AI tools proliferate across governments and institutions, the United States is experiencing a simultaneous AI public trust crisis. Fellows Jordan Loewen-Colón, Ayodele Odubela, and Jeanette Jordan recommended that the OAIP adopt a standardized evaluation framework for its partners, then publicize the criteria and a running list of its AI Learning Lab participants. They argued that these transparent steps would help Utah build public trust in AI innovation, modeling responsible AI development for other states.

“In our office, we try to bring a balance between optimism and caution. There’s so much potential, but also so many ways it can go wrong if we’re not careful,” Boyd told StateScoop. “We’re not just doing this for the sake of innovation. We’re doing it to serve people better, and to do that, we have to earn and keep their trust.”

While OAIP considers the full framework the Fellows proposed, Boyd said the office has already made changes based on the recommendations. OAIP has developed procurement checklists that ask vendors tough questions about how their AI systems are built and whether they have been tested for bias. It has also created templates for evaluating the risks of AI tools before deploying them and begun piloting ways to explain how AI systems make decisions, so that state workers and the public can understand which tools the government is using and why. In addition, the office has started partnering with local governments and universities to test the frameworks in new settings and solicit community feedback.

Read the Fellows’ full proposal here.
