Machine Learning | Stanford HAI

Learn about the latest advances in machine learning that allow systems to learn and improve over time.

Ronnie Chatterji | Seminar
Event | May 12, 2025, 12:00 PM - 1:00 PM
The AI Race Has Gotten Crowded—and China Is Closing In on the US
Wired | Media Mention | Apr 07, 2025
Topics: Regulation, Policy, Governance; Economy, Markets; Finance, Business; Generative AI; Industry, Innovation; Machine Learning; Sciences (Social, Health, Biological, Physical)

Vanessa Parli, Stanford HAI Director of Research and AI Index Steering Committee member, notes that the 2025 AI Index reports flourishing and higher-quality academic research in AI.

Stories for the Future 2024
Isabelle Levent | Research | Deep Dive | Mar 31, 2025
Topics: Machine Learning; Generative AI; Arts, Humanities; Communications, Media; Design, Human-Computer Interaction; Sciences (Social, Health, Biological, Physical)

We invited 11 sci-fi filmmakers and AI researchers to Stanford for Stories for the Future, a day-and-a-half experiment in fostering new narratives about AI. Researchers shared perspectives on AI, and filmmakers reflected on the challenges of writing AI narratives. Together, researcher-writer pairs transformed a research paper into a written scene. The challenge? Each scene had to include an AI manifestation but could not be about the personhood of AI or AI as a threat. Read the results of this project.

Whose Opinions Do Language Models Reflect?
Esin Durmus, Tatsunori Hashimoto, Faisal Ladhak, Cinoo Lee, Percy Liang, Shibani Santurkar | Policy Brief | Sep 20, 2023
Topics: Machine Learning

Since the November 2022 debut of ChatGPT, language models have been all over the news. But as people use chatbots—to write stories and look up recipes, to make travel plans and even further their real estate business—journalists, policymakers, and members of the public are increasingly paying attention to the important question of whose opinions these language models reflect. In particular, one emerging concern is that AI-generated text may be able to influence our views, including political beliefs, without us realizing it. This brief introduces a quantitative framework that allows policymakers to evaluate the behavior of language models and assess what kinds of opinions they reflect.
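The brief describes its quantitative framework only at a high level here. One way to make the idea concrete is to compare a model's distribution over survey answer choices with a human reference distribution, as in the minimal sketch below. The distributions, the 1-Wasserstein metric, and the function names are illustrative assumptions, not the brief's published method.

```python
# Hypothetical sketch: score how closely a language model's answer
# distribution on one multiple-choice survey question matches a human
# reference distribution. Distributions and metric choice are illustrative
# assumptions, not the brief's published methodology.
import numpy as np

def wasserstein_1d(p: np.ndarray, q: np.ndarray) -> float:
    """1-Wasserstein distance between distributions on an ordinal scale."""
    # For unit-spaced ordered choices this is the L1 gap between the CDFs.
    return float(np.abs(np.cumsum(p) - np.cumsum(q)).sum())

# Ordered answer choices, e.g. "strongly disagree" ... "strongly agree".
human_dist = np.array([0.10, 0.25, 0.30, 0.25, 0.10])  # survey respondents
model_dist = np.array([0.02, 0.08, 0.20, 0.40, 0.30])  # LM answer probabilities

assert np.isclose(human_dist.sum(), 1.0) and np.isclose(model_dist.sum(), 1.0)
print(f"opinion gap: {wasserstein_1d(human_dist, model_dist):.3f}")
```

A lower score means the model's stated opinions track the reference population more closely; repeating the comparison across many questions and demographic groups would yield a profile of whose opinions a model reflects.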

Percy Liang
Person | Foundation Models; Generative AI; Machine Learning; Natural Language Processing
Alex Tamkin | Seminar
Event | May 19, 2025, 12:00 PM - 1:00 PM

All Work Published on Machine Learning

Here are 3 Big Takeaways from Stanford's AI Index Report
Tech Brew | Media Mention | Apr 07, 2025
Topics: Sciences (Social, Health, Biological, Physical); Machine Learning; Regulation, Policy, Governance; Industry, Innovation

Vanessa Parli, HAI Director of Research and AI Index Steering Committee member, speaks about the biggest takeaways from the 2025 AI Index Report.
The Promise and Perils of Artificial Intelligence in Advancing Participatory Science and Health Equity in Public Health
Abby C King, Zakaria N Doueiri, Ankita Kaulberg, Lisa Goldman Rosas | Research | Feb 14, 2025
Topics: Foundation Models; Generative AI; Machine Learning; Natural Language Processing; Sciences (Social, Health, Biological, Physical); Healthcare

Current societal trends reflect increased mistrust in science and lowered civic engagement, which threaten to impair research that is foundational for ensuring public health and advancing health equity. One effective countermeasure to these trends lies in community-facing citizen science applications that increase public participation in scientific research, making this field an important target for artificial intelligence (AI) exploration. We highlight potentially promising citizen science AI applications that extend beyond individual use to the community level, including conversational large language models, text-to-image generative AI tools, descriptive analytics for analyzing integrated macro- and micro-level data, and predictive analytics. The novel adaptations of AI technologies for community-engaged participatory research also bring an array of potential risks. We highlight possible negative externalities and mitigations for some of the potential ethical and societal challenges in this field.
Improving Transparency in AI Language Models: A Holistic Evaluation
Rishi Bommasani, Daniel Zhang, Tony Lee, Percy Liang | Issue Brief | Feb 28, 2023
Topics: Machine Learning; Foundation Models

In this brief, Stanford scholars introduce Holistic Evaluation of Language Models (HELM), a framework for evaluating language models across commercial AI use cases.
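The central idea behind holistic evaluation is scoring every model on a common grid of scenarios and metrics rather than on a single leaderboard number. The sketch below illustrates that grid structure only; the scenario names, metric functions, and model callable are placeholders of my own, not HELM's actual API.

```python
# Hypothetical sketch of grid-style evaluation in the spirit of HELM:
# every model is scored on every (scenario, metric) pair, so coverage
# gaps are explicit. All names here are illustrative placeholders.
from typing import Callable

Scenario = list[tuple[str, str]]  # (prompt, reference answer) pairs

def exact_match(pred: str, ref: str) -> float:
    return float(pred.strip().lower() == ref.strip().lower())

def evaluate(model: Callable[[str], str],
             scenarios: dict[str, Scenario],
             metrics: dict[str, Callable[[str, str], float]]) -> dict:
    """Return a {scenario: {metric: mean score}} grid for one model."""
    grid = {}
    for s_name, examples in scenarios.items():
        preds = [(model(prompt), ref) for prompt, ref in examples]
        grid[s_name] = {
            m_name: sum(m(p, r) for p, r in preds) / len(preds)
            for m_name, m in metrics.items()
        }
    return grid

# Toy model and data, just to show the shape of the output.
toy_model = lambda prompt: "4" if "2+2" in prompt else "unknown"
scenarios = {"arithmetic_qa": [("What is 2+2?", "4"), ("What is 3+5?", "8")]}
print(evaluate(toy_model, scenarios, {"exact_match": exact_match}))
```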
Stanford HAI's 2025 AI Index Reveals Record Growth in AI Capabilities, Investment, and Regulation
Yahoo Finance | Media Mention | Apr 07, 2025
Topics: Economy, Markets; Machine Learning; Regulation, Policy, Governance; Workforce, Labor; Industry, Innovation; Sciences (Social, Health, Biological, Physical); Ethics, Equity, Inclusion

"The AI Index equips policymakers, researchers, and the public with the data they need to make informed decisions — and to ensure AI is developed with human-centered values at its core," says Russell Wald, Executive Director of Stanford HAI and Steering Committee member of the AI Index.
Policy-Shaped Prediction: Avoiding Distractions in Model-Based Reinforcement Learning
Nicholas Haber, Miles Huston, Isaac Kauvar | Research | Dec 13, 2024
Topics: Machine Learning; Foundation Models

Model-based reinforcement learning (MBRL) is a promising route to sample-efficient policy optimization. However, a known vulnerability of reconstruction-based MBRL consists of scenarios in which detailed aspects of the world are highly predictable but irrelevant to learning a good policy. Such scenarios can lead the model to exhaust its capacity on meaningless content, at the cost of neglecting important environment dynamics. While existing approaches attempt to solve this problem, we highlight its continuing impact on leading MBRL methods, including DreamerV3 and DreamerPro, with a novel environment where background distractions are intricate, predictable, and useless for planning future actions. To address this challenge, we develop a method for focusing the capacity of the world model through synergy of a pretrained segmentation model, a task-aware reconstruction loss, and adversarial learning. Our method outperforms a variety of other approaches designed to reduce the impact of distractors, and is an advance towards robust model-based reinforcement learning.
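The abstract names three ingredients without showing how they combine. Below is a minimal sketch of just one of them, a segmentation-weighted ("task-aware") reconstruction loss; the mask source, the weighting scheme, and the tensor shapes are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of a task-aware reconstruction loss for a world model:
# pixels flagged task-relevant by a pretrained segmentation model are weighted
# more heavily, so model capacity is not spent on predictable distractors.
# Shapes, the mask source, and the weighting are illustrative assumptions.
import torch

def task_aware_recon_loss(pred: torch.Tensor,      # (B, C, H, W) reconstruction
                          target: torch.Tensor,    # (B, C, H, W) observation
                          task_mask: torch.Tensor, # (B, 1, H, W) in [0, 1],
                                                   # from a segmentation model
                          distractor_weight: float = 0.1) -> torch.Tensor:
    """Pixelwise MSE, downweighted wherever the mask says 'distractor'."""
    per_pixel = (pred - target).pow(2)                         # (B, C, H, W)
    weight = task_mask + distractor_weight * (1.0 - task_mask)  # broadcasts
    return (weight * per_pixel).mean()

# Toy usage with random tensors standing in for real rollout frames.
B, C, H, W = 2, 3, 64, 64
pred, target = torch.rand(B, C, H, W), torch.rand(B, C, H, W)
mask = (torch.rand(B, 1, H, W) > 0.5).float()  # stand-in segmentation output
print(task_aware_recon_loss(pred, target, mask).item())
```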
Summary of AI Provisions from the National Defense Authorization Act 2023
Stanford HAI | Explainer | Jan 01, 2023
Topics: Machine Learning; Law Enforcement and Justice