
Katya Klinova is a specialist in AI governance focused on means to ensure the benefits of AI advancement are broadly and equitably shared. She has worked at Partnership on AI, Google, and the United Nations. All reflections are in her personal capacity. 

Published in November 2023.

Tell us a little about your career journey: How did you come to work in AI policy?

Earlier in my career, I was never actually aiming to work on AI policy. I began by working on exciting AI-related projects at Google after finishing college. Eventually, I realized how powerful a technology AI was becoming and the sort of economic opportunity it might bring. I went to graduate school to study policy and international development, and at the time AI policy was becoming a field unto itself. I was in the right place at the right time with the right background to work on AI policy, so my work in the space came naturally. 

One memorable moment was watching AlphaGo’s match against Lee Sedol. It was livestreamed at a Google campus cafe, normally a buzzing place. But that day you could hear a pin drop: everyone’s gaze was glued to the match. I remember thinking that this technology was going to be consequential in more ways than I could imagine at the time. Later, my graduate school work helped me understand why the benefits of AI were not automatically going to be shared with everyone. 

What are some of the current AI policy challenges you’re working on?

I think there are a lot of very well-meaning technologists working in this space. But there aren’t many people thinking about the incentive environment in which companies operate, how much that drives the direction of innovation, and what kinds of uses AI is going to be put toward. The technology you are working on is not being developed in a vacuum. 

“AI policy right now is too narrowly focused…We need broader participation on every front, from communities and labor unions.”

When I zoom out, I think that AI policy right now is too narrowly focused. There are larger macroeconomic structures that impact AI, like immigration restrictions and taxation schemes, that are not commonly discussed as relevant for AI governance. We also need the conversation to include the workers who are experiencing forms of AI-enabled surveillance, or whose work is being digitized and whose job quality is declining. A lot of low-income countries don’t have the capacity to develop or regulate these technologies—both development and regulation are very concentrated in a few countries. We need broader participation on every front, from communities and labor unions. 

We can learn lessons from the development of clean energy technologies. The drop in the price of clean energy did not happen magically or automatically based on the “natural” trajectory of the technology. It happened because there was a deliberate effort by policymakers, environmental activists, the scientific community, and the private sector to push for the development of “green” technology and fund R&D. 

What advice do you have for those interested in a similar career path? 

I find it difficult to give career advice, but I would say that we really need more people working on AI policy with different perspectives and different skill sets. So don’t doubt whether or not you have a role to play, because we are way behind on the amount of thinking we need to do around AI governance. Different perspectives are especially important because we need to figure out what’s missing and scope out new directions in addition to the ones that are already established. 

Would you recommend working at a tech company or going to public policy school?

Both can be very useful. If you work for a tech company, it’s important to make sure you don’t lock yourself into a bubble, which can happen. Tech companies are attractive to many people because they provide high-functioning, high-productivity work environments and are often less bureaucratic and complex to navigate than other institutions. But if all of our AI policy thinking comes from within the tech companies, that is a very dangerous position to be in, so I encourage everyone to consider joining other types of institutions and helping build their capacity to inform and guide the path of AI.

What skills do you think are important for success in AI policy, and how could readers acquire them?

“There’s no one-size-fits-all approach to skill building because there is such a wide range of skills that are relevant for AI governance.”

There’s no one-size-fits-all approach to skill building because there is such a wide range of skills that are relevant for AI governance. It’s important to have the courage to advocate and push for what you believe in, even if no one else is trying to and even if it’s not a mainstream idea. It’s also important to regularly step back, make sure you have solid foundations, and be clear about why you’re pursuing what you’re pursuing. An important skill is prototyping things and then trying to scale them responsibly—moving beyond mainstream AI ideas driven by commercial incentives, and putting AI in service of needs that won’t be served by markets alone.

Are there any programs, resources, or books you’d especially recommend for those interested in AI policy?

This is part of a series of career profiles, which aims to make career stories and resources more accessible to people without easy access to mentorship and advice. If you have suggestions for what questions you’d like to see answered in these profiles, please fill out our feedback form.
