Utah partnered with a nonprofit to boost its AI governance

When the Utah Office of AI Policy (OAIP) was established last year, officials understood that for the state to adopt artificial intelligence that serves residents equitably and transparently, its approach had to be built on sound governance and ethical principles.
Zach Boyd, the office’s director, said that’s why it partnered with the Aspen Institute’s Policy Academy on a yearlong collaboration designed to help state governments build responsible AI policy.
A report on the project, called “Implementing an AI Evaluation Framework,” was published last month. It outlines how Utah’s AI office, one of the few state agencies in the nation dedicated solely to AI governance, can assess the efficacy of AI tools while building public confidence in its oversight. The guidelines focus on fairness, transparency, privacy, accountability and human involvement.
“In our office, we try to bring a balance between optimism and caution. There’s so much potential, but also so many ways it can go wrong if we’re not careful,” Boyd said. “We’re not just doing this for the sake of innovation. We’re doing it to serve people better, and to do that, we have to earn and keep their trust.”
Boyd said it’s imperative for Utah to adopt AI technology and policies that align with the state’s values of family, religion and culture. He said the office’s key priorities include managing a regulatory sandbox, addressing deepfakes and examining AI companions and AI’s uses in education.
“We’ve realized that the state really values family and culture, and so we are emphasizing things that are maybe neglected in some other regions,” Boyd explained. “Things like the effects of AI companions on families and how that’s going to integrate into youth culture, how it’s going to impact things like romance and our religious communities.”
The academy’s guide encourages Utah OAIP to examine whether its AI technology can be easily understood by state employees and residents, avoids bias and can be trusted to make sound decisions. It also encourages collecting ongoing feedback from the public to make sure AI systems meet residents’ needs and values.
Jordan Loewen-Colon, an Aspen Policy Academy fellow and one of the project’s authors, said public trust is essential for state governments pursuing sustainable AI adoption.
“A lot of folks think AI is a switch you can flip. It’s not. It’s a process and it needs governance from day one,” said Loewen-Colon, who also works as an adjunct assistant professor at Queen’s University in Ontario. “There’s a real risk in rushing to adopt AI without listening to communities first. You have to build that trust upfront or you lose it for good.”
Boyd said his office is still reviewing the project and has not yet decided whether to formally adopt the framework as part of state policy, but it has already made several changes based on its recommendations. He said OAIP developed procurement checklists that ask vendors tough questions about how their AI systems are built and whether they’ve been tested for bias. It also created templates for evaluating the risks of AI tools before deploying them and began piloting ways to explain how AI systems make decisions, so that state workers and the public can understand which tools the government is using and why.
“We’ve always had this mindset of doing things the right way, even if it takes a little longer. That’s playing out in how we approach AI,” Boyd said.
Boyd said his office also started partnering with other governments and research institutions, including state agencies in Maine and South Carolina and the MILA Institute in Montreal, to test frameworks in new settings and solicit community feedback. Loewen-Colon said those are steps in the right direction.
“One of the hardest things is figuring out how to explain AI in a way that actually means something to regular people, not just experts,” he said.