To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch has been publishing a series of interviews focused on remarkable women who’ve contributed to the AI revolution. We’re publishing these pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.
Arati Prabhakar is director of the White House Office of Science and Technology Policy and the science adviser to President Joe Biden. Previously, she served as director of the National Institute of Standards and Technology (NIST) — the first woman to hold the position — and director of DARPA, the U.S. Defense Advanced Research Projects Agency.
Prabhakar has a bachelor’s degree in electrical engineering from Texas Tech University and earned her master’s in electrical engineering from the California Institute of Technology. In 1984, she became the first woman to earn a doctorate in applied physics from Caltech.
Briefly, how did you get your start in AI?
I came in to lead DARPA in 2012, and that was a moment when machine learning-based AI was burgeoning. We did amazing work with AI, and it was everywhere, so that was the first clue that something big was afoot. I came into this role at the White House in October 2022, and a month later, ChatGPT came out and captured everyone’s imagination with generative AI. That created a moment that President Biden and Vice President Kamala Harris seized upon to get AI on the right track, and that’s been the work that we’ve done over the last year.
What attracted you to the field?
I love big, powerful technologies. They always bring a bright side and a dark side, and that’s certainly the case here. The most interesting work I get to do as a technical person is creating, wrangling and driving these technologies, because ultimately — if we get it right — that’s where progress comes from.
What advice would you give to women seeking to enter the AI field?
It’s the same advice that I would give anyone who wants to participate in AI. There are so many ways to make a contribution, from getting steeped in the technology and building it, to using it for so many different applications, to doing the work to make sure we manage AI’s risks and harms. Whatever you do, understand that this is a technology that brings bright and dark sides. Most of all, go do something big and useful, because this is the time!
What are some of the most pressing issues facing AI as it evolves?
What I am really interested in is: What are the most pressing issues for us as a nation as we drive this technology forward? So much good work has been done to get AI on the right track and manage risks. We have a lot more to do, but the president’s executive order and White House Office of Management and Budget’s guidance to agencies about how to use AI responsibly are extremely important steps that put us on the right course.
And now I think the job is twofold. One is to make sure that AI does unfold in a responsible way so that it is safe, effective and trustworthy. The second is to use it to go big and to solve some of our great challenges. It has that potential for everything from health, to education, to decarbonizing our economy, to predicting the weather and so much more. That’s not going to happen automatically, but I think it’s going to be well worth the journey.
What are some issues AI users should be aware of?
AI is already in our lives. AI is serving up the ads that we see online and deciding what’s next in our feed. It’s behind the price you pay for an airline ticket. It might be behind the “yes” or “no” to your mortgage application. So the first thing is, just be aware of how much it is already in our environment. That can be good because of the creativity and the scale that’s possible. But that also comes with significant risks, and we all need to be smart users in a world that’s empowered — or driven, now — by AI.
What is the best way to responsibly build AI?
Like any potent technology, if your ambition is to use it to do something, you have to be responsible. That starts by recognizing that the power of these AI systems comes with enormous risks, and different kinds of risks depending on the application. We know you can use generative AI, for example, to boost creativity. But we also know it can warp our information environment. We know it can create safety and security problems.
There are many applications where AI allows us to be much more efficient and have scope, scale and reach that we’ve never had before. But you better make sure that it’s not embedding bias or destroying privacy along the way before you hit scale. And it has huge implications for work and for workers. If we get this right, it can empower workers by enabling them to do more and earn more, but that won’t happen unless we pay attention. And that’s what President Biden has been clear we must achieve: making sure that these technologies enable, not displace, workers.
Source: techcrunch.com