Women in AI: Kathi Vidal at the USPTO has been working on AI since the early 1990s
To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.
Kathi Vidal is an American intellectual property lawyer and former engineer who serves as director of the United States Patent and Trademark Office (USPTO).
Vidal began her career as an engineer for General Electric and Lockheed Martin, working in the areas of AI, software engineering and circuitry. She has a Bachelor’s degree in electrical engineering from Binghamton University, a Master’s degree in electrical engineering from Syracuse University and a JD from the University of Pennsylvania Law School.
Q&A
Briefly, how did you get your start in AI? What attracted you to the field?
When I started college at 16, I was interested in scientific problem solving. I had an oscilloscope, purchased at a garage sale, that I was constantly tinkering with, and I loved working on my Dodge Dart! This early fascination led me to GE’s Edison Engineering Program, where I was one of two women selected. We engaged in weekly technical problem-solving across engineering and scientific disciplines on top of rotational work assignments in different technical fields. When I was approached to join a three-person team working in the field of artificial intelligence, I jumped at it. The ability to engage in new, groundbreaking work in the early 1990s that could be applied across scientific and engineering disciplines to come up with more creative ways to innovate was thrilling. I saw it as a way to get away from the rigidity of current design principles and to more closely emulate the nuances humans bring to problem-solving.
What work are you most proud of (in the AI field)?
It would be a tie between my current work on U.S. government AI policies at the intersection of AI and innovation and my work developing the first AI fault diagnostic system for aircraft. As to the latter, I worked across neural networks, fuzzy logic and expert systems to build a resilient, self-learning system in the early 1990s. Though I left for law school before the system was deployed, I was excited to create something new in the relatively nascent AI space (compared to where AI is today) and to work with the PhDs at GE Research to share learnings across our projects. I was so excited about AI that I ended up writing my Master’s thesis on my work.
How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?
Candidly, in the 1990s, the way I navigated the challenges in the engineering field was by conforming (without realizing I was conforming). It was a different time, and it probably goes without saying that most leadership positions in engineering and in law firms were more male-dominated than they are today. It was suggested to me by some of my male colleagues that I needed to learn how to laugh less. But I found joy in life and what I was doing! I remember speaking in front of a room full of women at a women’s conference we created in the mid-2000s (before women’s conferences became the norm). When I finished speaking, a number of audience members came up to congratulate me on my speech and tell me that they had never seen me so lively and animated. And I was speaking about patent law. It was then that I had an “aha” moment — being appreciated for being authentic was how I felt included and successful at my job.
Since that time, I’ve been deliberate about being authentic and creating inclusive environments where women can thrive. For example, I’ve revamped hiring and promotion practices in organizations where I’ve served. Most recently at USPTO, our agency saw a nearly 5% increase in diversity among our leadership ranks within one year due to these changes. I’ve worked to champion policies that open the doors for more women to participate in innovation, recognizing that while more than 40% of those who use our free legal services to file patent applications identify as women, only 13% of patented inventors are women, so we’re working hard to close that gap. Along with U.S. Secretary of Commerce Gina Raimondo, I founded the Women’s Entrepreneurship initiative across the U.S. Commerce Department to empower more women business leaders and arm them with the information and assistance they need to be successful. I also proudly advance policies to uplift not only women but other communities that have been historically underrepresented in our innovation ecosystem through my work helping lead the Council for Inclusive Innovation and the Economic Development Administration’s National Advisory Council on Innovation and Entrepreneurship. And I mentor others in my free time, sharing lessons learned and developing the next generation of leaders and advocates. I obviously can’t do any of this work alone; it’s all through and with like-minded women and men.
What advice would you give to women seeking to enter the AI field?
First, we need you, so keep going. It’s important to have women involved in shaping the AI models of the future in order to mitigate bias or safety risks. And there are so many trailblazers out there: Fei-Fei Li at Stanford and Elham Tabassi at the National Institute of Standards and Technology (NIST), to name a couple. I’m honored to work alongside incredible leaders at the forefront of AI: Secretary Raimondo and Zoë Baird at the Department of Commerce, NIST Director Laurie Locascio, Copyright Office Director Shira Perlmutter and the new lead of the AI Safety Institute, Elizabeth Kelly. It’s imperative that we all work together, throughout government and the private sector, to create the future, or it will be created for us. And it may not be the future we believe in or want.
Second, find your tailwind and persist. Make the ask and put your goals out there to attract others to support you on your journey. Don’t take “no” personally. See “no” and resistance as a headwind. Find your tailwind and those mentors and sponsors who are bought into you, your success and what you can contribute in this terribly important field.
What are some of the most pressing issues facing AI as it evolves?
The U.S. is fortunate to lead the world in innovation by AI developers, and we therefore also have the responsibility to lead on policies that make AI safe and trustworthy and further our values. We are pursuing this in collaboration with other countries, in several multilateral venues and bilaterally. USPTO has a long history of this kind of collaboration and leadership. To ensure American values are embedded into AI policy, our AI and Emerging Technology Partnership, which we began in 2022, supports the Biden administration’s whole-of-government approach to AI, including the National AI Initiative, to advance U.S. leadership in AI. Most recently, we published guidance clarifying the level of human contribution needed for patenting AI-enabled inventions, promoting human ingenuity and incentivizing investment in AI-enabled innovations while not hindering future innovation by unnecessarily locking it up or stifling competition. To our knowledge, it’s the first such guidance in the world. We must achieve the same goals and balance when it comes to our creative sector, and we’re working with stakeholders and the Copyright Office to do so.
While we at USPTO are focused on harnessing AI to democratize and scale innovation, as well as policy at the intersection of AI and intellectual property, we’re also working with NIST and the National Telecommunications and Information Administration (NTIA) on other pressing issues, including the safe, secure and trustworthy development and use of AI and mechanisms that can create earned trust in AI.
What are some issues AI users should be aware of?
As President Biden stated in his executive order on AI, responsible AI use has the potential to help solve urgent challenges and make our world more prosperous, productive, innovative and secure, while irresponsible use could exacerbate societal harms “such as fraud, discrimination, bias and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.” AI users need to be thoughtful and deliberate in their use of AI so they do not perpetuate those harms. One key way is to stay abreast of the work NIST is doing through its AI Risk Management Framework and its U.S. AI Safety Institute.
What is the best way to responsibly build AI?
Together. To responsibly build AI, we need not only government intervention and policies, but also industry leadership. President Biden recognized this when he convened private AI companies and secured their voluntary commitments to manage the risks posed by AI. We in the U.S. government also need your feedback as we do our work. We’re regularly seeking your input through public engagements as well as requests for information or comments we issue in the Federal Register. For example, through our AI and Emerging Technology Partnership, we sought your comments before designing our Inventorship Guidance for AI-Assisted Inventions. We’re using your comments in response to the Copyright Office’s request for information on the intersection of copyright and AI to advise the Biden administration on national and international strategies. NIST asked for your input and information to support the safe, secure and trustworthy development and use of AI, and NTIA asked for your feedback on AI accountability. And we at USPTO will soon issue another request for comment to explore ways in which our patent laws may need to evolve to account for the way AI may influence other patentability factors or may create a minefield of “prior art,” making it harder to patent. The best thing you can do is stay tuned to the administration’s work on AI, including the work of NIST, USPTO, NTIA and the Department of Commerce at large, and to provide your feedback so we can build responsible AI together.
How can investors better push for responsible AI?
Investors should do what they do best: invest in the work. Progress in responsible AI can’t come out of thin air; we need companies in this space doing the hard work to bring about the responsible AI companies of tomorrow. We need investors to ask the right questions, to push for responsible development and to use their money to support the responsible AI of the future. Further, they should impress upon the companies they invest in the need to prioritize IP protection and cybersecurity, and to avoid accepting investments from suspicious sources. All three are necessary to ensure control over the work and to ensure that work creates jobs and bolsters national security.