Response to OSTP’s Request for Information on the Development of an AI Action Plan | Stanford HAI
Date: March 17, 2025
Topics: Regulation, Policy, Governance

Abstract

In this response to a request for information issued by the National Science Foundation’s Networking and Information Technology Research and Development National Coordination Office (on behalf of the Office of Science and Technology Policy), scholars from Stanford HAI, CRFM, and RegLab urge policymakers to prioritize four areas of policy action in their AI Action Plan: 1) Promote open innovation as a strategic advantage for U.S. competitiveness; 2) Maintain U.S. AI leadership by promoting scientific innovation; 3) Craft evidence-based AI policy that protects Americans without stifling innovation; 4) Empower government leaders with resources and technical expertise to ensure a “whole-of-government” approach to AI governance.

Authors
  • Caroline Meinhardt
  • Daniel Zhang
  • Rishi Bommasani
  • Jennifer King
  • Russell Wald
  • Percy Liang
  • Daniel E. Ho

Related Publications

Safeguarding Third-Party AI Research
Kevin Klyman, Shayne Longpre, Sayash Kapoor, Rishi Bommasani, Percy Liang, Peter Henderson
Feb 13, 2025
Policy Brief
Topics: Privacy, Safety, Security; Regulation, Policy, Governance

This brief examines the barriers to independent AI evaluation and proposes safe harbors to protect good-faith third-party research.

Assessing the Implementation of Federal AI Leadership and Compliance Mandates
Jennifer Wang, Mirac Suzgun, Caroline Meinhardt, Daniel Zhang, Kazia Nowacki, Daniel E. Ho
Jan 17, 2025
White Paper
Topics: Government, Public Administration; Regulation, Policy, Governance

This white paper assesses federal efforts to advance leadership on AI innovation and governance through recent executive actions and emphasizes the need for senior-level leadership to achieve a whole-of-government approach.

Response to U.S. AI Safety Institute’s Request for Comment on Managing Misuse Risk for Dual-Use Foundation Models
Rishi Bommasani, Alexander Wan, Yifan Mai, Percy Liang, Daniel E. Ho
Sep 09, 2024
Response to Request
Topics: Regulation, Policy, Governance; Foundation Models; Privacy, Safety, Security

In this response to the U.S. AI Safety Institute’s (US AISI) request for comment on its draft guidelines for managing the misuse risk for dual-use foundation models, scholars from Stanford HAI, the Center for Research on Foundation Models (CRFM), and the Regulation, Evaluation, and Governance Lab (RegLab) urge the US AISI to strengthen its guidance on reproducible evaluations and third-party evaluations, as well as clarify guidance on post-deployment monitoring. They also encourage the institute to develop similar guidance for other actors in the foundation model supply chain and for non-misuse risks, while ensuring the continued open release of foundation models absent evidence of marginal risk.

Pathways to Governing AI Technologies in Healthcare
Caroline Meinhardt, Alaa Youssef, Rory Thompson, Daniel Zhang, Rohini Kosoglu, Kavita Patel
Jul 15, 2024
Explainer
Topics: Healthcare; Regulation, Policy, Governance

Leading policymakers, academics, healthcare providers, AI developers, and patient advocates discuss the path forward for healthcare AI policy at a closed-door workshop.
