The Evolution of Safety: Stanford’s Mykel Kochenderfer Explores Responsible AI in High-Stakes Environments

Date
May 09, 2025
Topics
Privacy, Safety, Security

Mykel Kochenderfer is an associate professor of aeronautics and astronautics at Stanford and senior fellow at Stanford HAI.

As AI technologies rapidly evolve, Professor Kochenderfer leads the charge in developing effective validation mechanisms to ensure safety in autonomous systems like vehicles and drones.

As he explains the complex work of creating advanced algorithms for safety systems in high-stakes and uncertain environments like driving a car or flying a plane, Professor Mykel Kochenderfer takes a moment to ground the discussion. He reaches for a model of the Wright Flyer, a biplane built by the Wright brothers in 1903. 

“The time between this,” Kochenderfer says, and then puts it down only to hold up a model of one of the earliest commercial airliners, “and this is only a few decades.”

With each decade, aviation technology and safety systems have steadily improved. Those improvements were incremental and involved both risk-taking and iterative testing, he said, so that “right now one of the safest places to be is up around 30,000 feet in a metal tube. To me that’s really remarkable.”

An associate professor of aeronautics and astronautics at Stanford, Kochenderfer is the director of the Stanford Intelligent Systems Laboratory (SISL), where he conducts research on advanced algorithms and analytical methods for decision-making systems. He is also a new senior fellow at the Stanford Institute for Human-Centered AI (HAI). 

His team at SISL focuses on high-stakes systems where safety and efficiency are critical, such as air traffic control, unmanned aircraft, and autonomous vehicles. Air travel is safe, but Kochenderfer has spent his career working to make it even safer.

Before coming to Stanford, Kochenderfer worked on airspace modeling and aircraft collision avoidance at MIT’s Lincoln Laboratory. His early work led to the creation of the Airborne Collision Avoidance System X (ACAS X), an onboard safety system meant to prevent midair collisions that uses advanced algorithms to detect nearby aircraft and provide maneuver recommendations to pilots. 

How is AI safety defined?

What is safe depends on the application. For example, in aviation, safety is defined physically; we want to avoid metal hitting metal. In robotics and automated driving, we want to avoid hitting physical objects. 

Other AI systems require defining safety in a non-physical sense. For example, we may want our AI chat agent to offer answers to questions like why hitting flint with a knife creates a spark, but we would not want it to give a detailed description of how to make a bomb. Or we may not want our language model to spout racist comments. But these are very different kinds of safety considerations than what we’re looking at in aerospace. 

I teach an introductory seminar on potential downstream unintended societal consequences, such as how some of these systems could lead to job displacement or negatively impact how humans relate to each other. From a technical point of view, downstream consequences can be very difficult to predict.

What trends are emerging from your AI safety survey research?

The AI community is primarily focused on building AI systems, but there’s been relatively little focus on how to go about rigorously evaluating those systems before deployment. Assessing these systems is becoming very important because industry is making enormous investments, and we want to deploy these systems to reap the economic and other societal benefits. 

However, there have been some significant missteps, which can lead to real societal harm, not just to the reputations of the companies deploying these systems. What we’re interested in doing in our lab – and as part of the Stanford Center for AI Safety, in general – is developing quantitative tools to assist in system validation. This has been really exciting. Last year, we offered a course on the validation of safety-critical systems. So if you have an AI system and have to guarantee a certain level of performance and reliability, how do you go about doing that? It’s the first course of its kind at Stanford, and we just released a preprint of our new textbook called Algorithms for Validation.

Where are the gaps in building safe and responsible AI?

One significant gap is how we do this validation efficiently. Whether building a collision avoidance system or a reliable language model, we do brute-force simulation, often running enormous numbers of simulations before finding even one failure. You need a statistically significant collection of failures to correctly identify the failure modes, characterize them, and estimate a failure probability. But running so many of these simulations can be very costly, so we’re pushing to make that as efficient as possible. It turns out you can use AI to help guide that process. We can use AI models such as diffusion models, Markov chain Monte Carlo, or others. There’s a whole variety of AI models that we can repurpose to do validation more efficiently.
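To make the cost issue concrete, here is a minimal sketch (an illustration, not code from SISL or the Algorithms for Validation textbook) of estimating a rare failure probability for a hypothetical one-dimensional disturbance model. It compares naive Monte Carlo, which wastes almost every simulation on non-failures, with importance sampling, one classical way of steering samples toward the failure region and reweighting them. The Gaussian disturbance model, the 4-sigma threshold, and the is_failure check are all assumptions chosen for illustration.

```python
# Minimal sketch (illustrative only): estimating a rare failure probability
# for a hypothetical 1-D disturbance model, by naive Monte Carlo and by
# importance sampling. All thresholds and distributions are assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def is_failure(x):
    # Hypothetical safety property: the system "fails" when the disturbance
    # exceeds a 4-sigma threshold under the nominal model.
    return x > 4.0

n = 100_000

# Naive Monte Carlo: sample disturbances from the nominal model N(0, 1).
# Failures are so rare that almost every simulation is "wasted."
x_nominal = rng.normal(0.0, 1.0, n)
p_naive = is_failure(x_nominal).mean()

# Importance sampling: draw from a proposal shifted into the failure region,
# then reweight each sample by the likelihood ratio nominal(x) / proposal(x)
# so the estimate remains unbiased under the nominal model.
x_prop = rng.normal(4.0, 1.0, n)
weights = norm.pdf(x_prop, loc=0.0, scale=1.0) / norm.pdf(x_prop, loc=4.0, scale=1.0)
p_is = (is_failure(x_prop) * weights).mean()

print(f"naive Monte Carlo estimate:   {p_naive:.2e}")
print(f"importance-sampling estimate: {p_is:.2e}")
print(f"exact tail probability:       {norm.sf(4.0):.2e}")
```

With the same 100,000 samples, the naive estimator typically observes only a handful of failures (the exact tail probability here is about 3e-5), while the shifted proposal concentrates effort where failures actually occur. Guiding that proposal with learned models is the kind of efficiency gain described above.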

But how do you know you’ve done enough to make your model sufficiently high-fidelity? Simulation can only help you build a safety case, because there is no end to the simulations you could run, and there’s no hard guarantee that your model didn’t miss something. You want to use the best possible model for that evaluation; if your model is too coarse or unrealistic, your conclusions will also be unrealistic. You may grossly underestimate or overestimate the safety of your system.

The automotive industry has invested a lot in building these high-fidelity simulations for automated driving, for instance. You want to make your simulations as realistic as possible and ensure that the distribution over the scenarios you run represents the real world as closely as possible. After rigorous simulation studies, you want to start with a limited deployment, as Waymo (the autonomous car service) has done. They focused on locations like San Francisco, drove many miles there, and learned from that experience. What they learn from the limited deployment helps inform changes to the models they use in simulation, and then gradually, over time, they expand their deployments to other areas.

In engineering we want to take baby steps. You wouldn’t want the first bridge you ever build to be the Golden Gate Bridge. You want to start with something much more basic and then build sophistication from there. And that’s exactly what happened with aviation. We started off with something like this, the Wright Flyer, in 1903 at Kitty Hawk, and then within a matter of a few decades we had commercial airliners. Part of that involved incremental progress and some amount of risk-taking, but right now one of the safest places to be is up around 30,000 feet in a metal tube. To me that’s really remarkable.

What interests or worries you most about AI safety right now?

There has been a tremendous amount of well-founded enthusiasm for AI systems and for their deployment. The worry that inspired this latest book on validation is that we will become dependent on these systems before they’re properly understood.

I want to be able to provide those in academia and industry the tools to properly validate their systems. Not just to make it through a certification process mandated by the government but to actually make sure these systems are safe. That’s in the interest of industry as well. A major accident can be catastrophic for a company, as we have seen in the automated driving space.

What current projects are you excited about?

On the research side, I’m very excited about validating these language models. They encode enormous amounts of common-sense knowledge, and that’s what you need in order to build truly useful AI systems, whether they’re robots in homes or aircraft assistance systems for pilots. There’s tremendous potential there. But we have to make sure these systems don’t hallucinate or provide harmful information. That’s a major focus of our research.

I’m also very excited about the education side: communicating these insights to the next generation of students, to industry, and, through HAI, to those in government so they can make policies that make sense.

Contributor(s)
Scott Hadly

Related News

A Framework to Report AI’s Flaws
Andrew Myers
Apr 28, 2025
News

Pointing to "white-hat" hacking, AI policy experts recommend a new system of third-party reporting and tracking of AI’s flaws.

23andMe’s DNA Database Is Up For Sale. Who Might Want It, And What For?
Washington Post
Mar 25, 2025
Media Mention

After 23andMe announced that it’s headed to bankruptcy court, it’s unclear what will happen to the mass of sensitive genetic data it holds. Jen King, Policy Fellow at HAI, comments on where this data could end up and how it could be used.

Signal Isn’t Infallible, Despite Being One Of The Most Secure Encrypted Chat Apps
NBC News
Mar 25, 2025
Media Mention

HAI Policy Fellow Riana Pfefferkorn explains the different types of risk protection the private messaging app Signal can and cannot offer its users.
