Daniel E. Ho | Stanford HAI

Faculty, Senior Fellow

Daniel E. Ho

William Benjamin Scott and Luna M. Scott Professor of Law | Professor of Political Science | Professor of Computer Science (by courtesy) | Senior Fellow, Stanford HAI | Senior Fellow, Stanford Institute for Economic Policy Research | Director of the Regulation, Evaluation, and Governance Lab (RegLab)

Topics
Democracy
Government, Public Administration
Law Enforcement and Justice
Regulation, Policy, Governance

Daniel E. Ho, J.D., Ph.D., is the William Benjamin Scott and Luna M. Scott Professor of Law, Professor of Political Science, and Senior Fellow at the Stanford Institute for Economic Policy Research at Stanford University. He directs the Regulation, Evaluation, and Governance Lab (RegLab) at Stanford, and is a Faculty Fellow at the Center for Advanced Study in the Behavioral Sciences and Senior Fellow of the Stanford Institute for Human-Centered Artificial Intelligence (HAI).


Latest Related to Daniel E. Ho

response to request

Response to OSTP’s Request for Information on the Development of an AI Action Plan

Rishi Bommasani, Daniel E. Ho, Percy Liang, Jennifer King, Russell Wald, Caroline Meinhardt, Daniel Zhang
Regulation, Policy, Governance | Mar 17

In this response to a request for information issued by the National Science Foundation’s Networking and Information Technology Research and Development National Coordination Office (on behalf of the Office of Science and Technology Policy), scholars from Stanford HAI, CRFM, and RegLab urge policymakers to prioritize four areas of policy action in their AI Action Plan: 1) Promote open innovation as a strategic advantage for U.S. competitiveness; 2) Maintain U.S. AI leadership by promoting scientific innovation; 3) Craft evidence-based AI policy that protects Americans without stifling innovation; 4) Empower government leaders with resources and technical expertise to ensure a “whole-of-government” approach to AI governance.

whitepaper

Assessing the Implementation of Federal AI Leadership and Compliance Mandates

Mirac Suzgun, Jennifer Wang, Kazia Nowacki, Daniel E. Ho, Caroline Meinhardt, Daniel Zhang
Government, Public Administration | Regulation, Policy, Governance | Jan 17

This white paper assesses federal efforts to advance leadership on AI innovation and governance through recent executive actions and emphasizes the need for senior-level leadership to achieve a whole-of-government approach.

media mention

Stanford AI Model Helps Locate Racist Deeds In Santa Clara County

KQED
Government, Public Administration | Regulation, Policy, Governance | Law Enforcement and Justice | Machine Learning | Foundation Models | Oct 21

Stanford's RegLab, directed by HAI Senior Fellow Daniel E. Ho, developed an AI model that helped Santa Clara County accelerate the process of flagging and mapping restrictive covenants.

All Related

AI Seeks Out Racist Language in Property Deeds for Termination
Bloomberg Law
Oct 17, 2024
media mention

Dan Ho, HAI Senior Fellow and director of the Stanford RegLab, discusses RegLab's AI model that analyzes decades of property records, helping to identify illegal racially restrictive language in housing documents.

Machine Learning
Regulation, Policy, Governance
Foundation Models
Law Enforcement and Justice

Congressional Boot Camp on AI
Oct 06, 2024

The Congressional Boot Camp on AI convenes staffers from both the House and Senate on Stanford University’s campus in California. Each session will feature world-class scholars from Stanford University, leaders from Silicon Valley, and pioneers from civil society organizations. The 2025 boot camp will be held August 11-13, 2025.

Response to U.S. AI Safety Institute’s Request for Comment on Managing Misuse Risk For Dual-Use Foundation Models
Rishi Bommasani, Daniel E. Ho, Percy Liang, Alexander Wan, Yifan Mai
Sep 09, 2024
response to request

In this response to the U.S. AI Safety Institute’s (US AISI) request for comment on its draft guidelines for managing the misuse risk for dual-use foundation models, scholars from Stanford HAI, the Center for Research on Foundation Models (CRFM), and the Regulation, Evaluation, and Governance Lab (RegLab) urge the US AISI to strengthen its guidance on reproducible evaluations and third-party evaluations, as well as clarify guidance on post-deployment monitoring. They also encourage the institute to develop similar guidance for other actors in the foundation model supply chain and for non-misuse risks, while ensuring the continued open release of foundation models absent evidence of marginal risk.

Regulation, Policy, Governance
Foundation Models
Privacy, Safety, Security

AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries
Faiz Surani, Daniel E. Ho
May 23, 2024
news

A new study reveals the need for benchmarking and public evaluations of AI tools in law.

On the Societal Impact of Open Foundation Models
Sayash Kapoor, Rishi Bommasani, Daniel E. Ho, Percy Liang, and Arvind Narayanan
Feb 27, 2024
news

New research adds precision to the debate on openness in AI.

Transparency of AI EO Implementation: An Assessment 90 Days In
Kevin Klyman, Christie M. Lawrence, Rohini Kosoglu, Daniel E. Ho, Hamzah Daud, Caroline Meinhardt, Daniel Zhang
Feb 22, 2024
explainer

The U.S. government has made swift progress and broadened transparency, but that momentum needs to be maintained for the next looming deadlines.

Regulation, Policy, Governance
Government, Public Administration

Daniel E. Ho's Testimony Before the California Senate Governmental Organization Committee and the Senate Budget and Fiscal Review Subcommittee No. 4 on State Administration and General Government
Daniel E. Ho
Feb 21, 2024
testimony

In this testimony presented in the California Senate Hearing “California at the Forefront: Steering AI Towards Ethical Horizons,” Daniel E. Ho offers three recommendations for how California should lead the nation in responsible AI innovation by nurturing and attracting technical talent into public service, democratizing access to computing and data resources, and addressing the information asymmetry about AI risks.

Regulation, Policy, Governance

Generating Medical Errors: GenAI and Erroneous Medical References
Kevin Wu, Eric Wu, James Zou, Daniel E. Ho
Feb 12, 2024
news

A new study finds that large language models used widely for medical assessments cannot back up claims.

Healthcare

Hallucinating Law: Legal Mistakes with Large Language Models are Pervasive
Daniel E. Ho, Matthew Dahl, Varun Magesh, Mirac Suzgun
Jan 11, 2024
news

A new study finds disturbing and pervasive errors among three popular models on a wide range of legal tasks.

Considerations for Governing Open Foundation Models
Ashwin Ramaswami, Arvind Narayanan, Kevin Klyman, Shayne Longpre, Sayash Kapoor, Rishi Bommasani, Marietje Schaake, Daniel E. Ho, Percy Liang, Daniel Zhang
Dec 13, 2023
issue brief

This brief highlights the benefits of open foundation models and calls for greater focus on their marginal risks.

Foundation Models

Responses to OMB's Request for Comment on Draft Policy Guidance on Agency Use of AI
Jennifer Pahlka, Amy Perez, Gerald Ray, Timothy O'Reilly, Todd Park, DJ Patil, Kit T. Rodolfa, Mariano-Florentino Cuéllar, Daniel E. Ho, Percy Liang
Nov 30, 2023
response to request

Scholars from Stanford RegLab and HAI submitted two responses to the Office of Management and Budget’s (OMB) request for comment on its draft policy guidance “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.”