Outgoing White House AI director explains policy challenges ahead
They’re making good progress on this and anticipate having that framework in place by early 2023. There’s some nuance here: different people interpret risk differently, so it is important to have a shared understanding of what risk is, what the potential harms may be, and what appropriate risk-reduction approaches can look like.
You talked about bias in AI. Are there ways the government can use regulation to help solve that problem?
There are both regulatory and non-regulatory ways to help. Many existing laws prohibit the use of any kind of system that discriminates, and that includes AI. A good approach is to see how existing law has been applied, then clarify it specifically for AI and identify where the vulnerabilities lie.
NIST released a report earlier this year about bias in AI. It describes several approaches that should be considered for managing bias, most of which concern best practices: things like constantly monitoring the system, or providing an opportunity for recourse if people believe they’ve been harmed.
It also means documenting how these systems are trained and on what data, so we can understand where bias might arise. And it is about accountability: ensuring that the developers and users who implement these systems are held accountable when the systems are not developed or used appropriately.
In your opinion, what is the right balance between public and private AI development?
The private sector is investing significantly more than the federal government in AI research and development, but the nature of that investment is completely different. Private-sector investment is heavily focused on products and services, while the federal government invests in cutting-edge, long-term research that doesn’t necessarily have market incentives behind it but potentially opens the door to brand-new ways of implementing AI. So, in terms of R&D, it is very important that the federal government invest in areas where industry has no market incentive to invest.
Industry can work with the federal government to help define some of those real-world challenges, which can then inform US federal investment.
There is a lot that government and industry can learn from each other. Government can learn from the best practices and lessons that industry has developed within companies, and can focus on putting the right guardrails in place for AI.