What the departing White House chief tech advisor has to say on AI
President Biden’s administration will end within two months, and likely to depart with him is Arati Prabhakar, the top mind for science and technology in his cabinet. She has served as Director of the White House Office of Science and Technology Policy since 2022 and was the first to demonstrate ChatGPT to the president in the Oval Office. Prabhakar was instrumental in passing the president’s executive order on AI in 2023, which sets guidelines for tech companies to make AI safer and more transparent (though it relies on voluntary participation).
The incoming Trump administration has not presented a clear thesis of how it will handle AI, but plenty of people in it will want to see that executive order nullified. Trump said as much in July, endorsing the 2024 Republican Party Platform that says the executive order “hinders AI innovation and imposes Radical Leftwing ideas on the development of this technology.” Venture capitalist Marc Andreessen has said he would support such a move.
However, complicating that narrative will be Elon Musk, who for years has expressed fears about doomsday AI scenarios, and has been supportive of some regulations aiming to promote AI safety.
Ahead of her departure, I sat down with Prabhakar and asked her to reflect on President Biden’s AI accomplishments and on how AI risks, immigration policies, the CHIPS Act, and more could change under Trump.
This conversation has been edited for length and clarity.
Every time a new AI model comes out, there are concerns about how it could be misused. As you think back to what were hypothetical safety concerns just two years ago, which ones have come true?
We identified a whole host of risks when large language models burst on the scene, and the one that has fully manifested in horrific ways is deepfakes and image-based sexual abuse. We’ve worked with our colleagues at the Gender Policy Council to urge industry to step up and take some immediate actions, which some of them are doing. There are a whole host of things that can be done—payment processors could actually make sure people are adhering to their Terms of Use. They don’t want to be supporting [image-based sexual abuse] and they can actually take more steps to make sure that they’re not. There’s legislation pending, but that’s still going to take some time.
Have there been risks that didn’t pan out to be as concerning as you predicted?
At first there was a lot of concern expressed by the AI developers about biological weapons. When people did the serious benchmarking about how much riskier that was compared with someone just doing Google searches, it turns out, there’s a marginally worse risk, but it is marginal. If you haven’t been thinking about how bad actors can do bad things, then the chatbots look incredibly alarming. But you really have to say, compared to what?
For many people, there’s a knee-jerk skepticism about the Department of Defense or police agencies going all in on AI. I’m curious what steps you think those agencies need to take to build trust.
If consumers don’t have confidence that the AI tools they’re interacting with respect their privacy, don’t embed bias and discrimination, and don’t cause safety problems, then all the marvelous possibilities really aren’t going to materialize. Nowhere is that more true than in national security and law enforcement.
I’ll give you a great example. Facial recognition technology is an area where there have been horrific, inappropriate uses: take a grainy video from a convenience store and identify a black man who has never even been in that state, who’s then arrested for a crime he didn’t commit. (Editor’s note: Prabhakar is referring to this story). Wrongful arrests based on a really poor use of facial recognition technology, that has got to stop.
In stark contrast to that, when I go through security at the airport now, it takes my picture and compares it to my ID to make sure that I am the person I say I am. That’s a very narrow, specific application that’s matching my image to my ID, and the sign tells me—and I know from our DHS colleagues that this is really the case—that they’re going to delete the image. That’s an efficient, responsible use of that kind of automated technology. Appropriate, respectful, responsible—that’s where we’ve got to go.
Were you surprised at the AI safety bill getting vetoed in California?
I wasn’t. I followed the debate, and I knew that there were strong views on both sides. I think what the opponents of that bill expressed, which I think was accurate, is that it was simply impractical: it was an expression of desire about how to assess safety, but we actually just don’t know how to do those things. No one knows. It’s not a secret, it’s a mystery.
To me, it really reminds us that while all we want is to know how safe, effective and trustworthy a model is, we actually have very limited capacity to answer those questions. Those are actually very deep research questions, and a great example of the kind of public R&D that now needs to be done at a much deeper level.
Let’s talk about talent. Much of the recent National Security Memorandum on AI was about how to help the right talent come from abroad to the US to work on AI. Do you think we’re handling that in the right way?
It’s a hugely important issue. This is the ultimate American story, that people have come here throughout the centuries to build this country, and it’s as true now in science and technology fields as it’s ever been. We’re living in a different world. I came here as a small child because my parents came here in the early 1960s from India, and in that period, there were very limited opportunities [to emigrate to] many other parts of the world.
One of the good pieces of news is that there is much more opportunity now. The other piece of news is that we do have a very critical strategic competition with the People’s Republic of China, and that makes it more complicated to figure out how to continue to have an open door for people who come seeking America’s advantages, while making sure that we continue to protect critical assets like our intellectual property.
Do you think the divisive debates around immigration, especially around the time of the election, may hurt the US’s ability to bring the right talent into the country?
Because we’ve been stalled as a country on immigration for so long, what is caught up in that is our ability to deal with immigration for the STEM fields. It’s collateral damage.
Has the CHIPS Act been successful?
I’m a semiconductor person starting back with my graduate work. I was astonished and delighted when, after four decades, we actually decided to do something about the fact that semiconductor manufacturing capability got very dangerously concentrated in just one part of the world [Taiwan]. So it was critically important that, with the President’s leadership, we finally took action. And the work that the Commerce Department has done to get those manufacturing incentives out, I think they’ve done a terrific job.
One of the main beneficiaries so far of the CHIPS Act has been Intel. There are varying degrees of confidence in whether it will deliver on building a domestic chip supply chain in the way the CHIPS Act intended. Is it risky to put so many eggs in the basket of one chipmaker?
I think the most important thing I see in terms of the industry with the CHIPS Act is that today we’ve got not just Intel, but TSMC, Samsung, SK Hynix and Micron. These are the five companies whose products and processes are at the most advanced nodes in semiconductor technology. They are all now building in the US. There’s no other part of the world that’s going to have all five of those. An industry is bigger than a company. I think when you look at the aggregate, that’s a signal to me that we’re on a very different track.
You are the President’s chief advisor for science and technology. I want to ask about the cultural authority that science has, or doesn’t have, today. RFK Jr. is the pick for health secretary, and in some ways, he captures a lot of frustration that Americans have about our healthcare system. In other ways, he has many views that can only be described as anti-science. How do you reflect on the authority that science has now?
I think it’s important to recognize that we live in a time when trust in institutions has declined across the board, though trust in science remains relatively high compared with what’s happened in other areas. But it’s very much part of this broader phenomenon, and I think that the scientific community has some roles [to play] here. The fact of the matter is that despite America having the best biomedical research that the world has ever seen, we don’t have robust health outcomes. Three dozen countries have longer life expectancies than America. That’s not okay, and that disconnect between advancing science and changing people’s lives is just not sustainable. The pact that science and technology and R&D makes with the American people is that if we make these public investments, it’s going to improve people’s lives, and when that’s not happening, it does erode trust.
Is it fair to say that that gap—between the expertise we have in the US and our poor health outcomes—explains some of the rise in conspiratorial thinking, in the disbelief of science?
It leaves room for that. Then there’s a quite problematic rejection of facts. It’s troubling if you’re a researcher, because you just know that what’s being said is not true. The thing that really bothers me is [that the rejection of facts] changes people’s lives, and it’s extremely dangerous and harmful. Think about if we lost herd immunity for some of the diseases for which we right now have fairly high levels of vaccination. It was an ugly world before we tamed infectious disease with the vaccines that we have.