
To give female scholars and others focused on AI the well-deserved and overdue attention they merit, TechCrunch has been publishing a series of interviews spotlighting remarkable women who have contributed to the AI revolution. We're publishing these pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.
Arati Prabhakar is director of the White House Office of Science and Technology Policy and science advisor to President Joe Biden. Previously, she was the first woman to serve as director of the National Institute of Standards and Technology (NIST) and director of the Defense Advanced Research Projects Agency (DARPA).
Prabhakar earned a bachelor's degree in electrical engineering from Texas Tech University and a master's degree in electrical engineering from California Institute of Technology. In 1984, she became the first woman to earn a doctorate in applied physics from Caltech.
In a nutshell, how did you get started in AI?
I came to lead DARPA in 2012, at a time when machine learning-based AI was advancing rapidly. We saw remarkable things being done with AI, and it was turning up everywhere; that was our first clue that something big was underway. I took on this role at the White House in October 2022, and a month later ChatGPT arrived and captured everyone's imagination with generative AI. That created an opening for President Biden and Vice President Kamala Harris to steer AI in the right direction, and that's what we've been doing for the past year.
What drew you to this field?
I like big, powerful technologies. They always have a light side and a dark side, and that's certainly true here. As a technologist, the most exciting work I do is creating these technologies, debating them, and driving them forward. Because ultimately, if we get it right, that's how progress happens.
What advice would you give to women looking to enter the AI field?
My advice is the same for anyone who wants to get involved in AI. There are so many ways to contribute, from getting familiar with the technology and building it, to applying it across a range of uses, to working to manage AI's risks and harms. Whatever you do, understand that this is a technology with both a light side and a dark side. Above all, go do something big and useful, because now is the time!
What are the most pressing issues facing AI as it evolves?
The question I'm really focused on is: what are the most pressing issues for us as a country as we move this technology forward? A lot of great work has been done to steer AI in the right direction and manage its risks. More remains to be done, but the President's executive order and the White House Office of Management and Budget's guidance to agencies on using AI responsibly are critical steps that put us on the right path.
From here, I think it comes down to two things. The first is ensuring AI is developed and deployed responsibly, so that it is safe, effective, and reliable. The second is leveraging it to go big and tackle major challenges. It has potential in everything from health and education to economic decarbonization and weather forecasting. That won't happen automatically, but I think it would be well worth the journey.
What are some issues AI users need to be aware of?
AI is already in our lives. It powers the ads we see online and determines what appears next in our feeds. It's behind the price you pay for a plane ticket. It may be behind the "yes" or "no" on your mortgage application. So the first step is recognizing how much of it already surrounds us. That can be a good thing, given its creativity and potential scale, but it also carries significant risks, and we all need to be smart users in a world powered and shaped by AI.
What is the best way to build AI responsibly?
Like any powerful technology, if you have the ambition to do something with it, you have to take responsibility for it. That starts with recognizing that the power of these AI systems comes with enormous risks, and that different kinds of risk arise depending on the application. For example, we know generative AI can boost creativity. But we also know it can transform our information environment, and that it can create safety and security problems.
There are many applications where AI can help us work far more efficiently and achieve a scope, scale, and reach we have never had before. But before you get to scale, you have to make sure it isn't introducing bias or eroding privacy. And it has a huge impact on work and workers. If we do this right, we can empower workers, enabling them to do more and earn more; if we're not careful, that won't happen. President Biden has made clear that this is what we must achieve: ensuring these technologies empower workers rather than replace them.