Upcoming Events | Stanford HAI

Previous Events at HAI
Workshop on Interactive AI Systems for Live Audiovisual Performance
Workshop | Mar 05, 2025 | 10:00 AM - 5:00 PM

Arts, Humanities

The First Workshop of a Public AI Assistant to World Wide Knowledge (WWK)
Workshop | Feb 13, 2025

The Stanford Open Virtual Assistant Lab, with sponsorship from the Alfred P. Sloan Foundation and the Stanford Institute for Human-Centered Artificial Intelligence (HAI), is organizing an invitation-only workshop focused on the concept of a public AI Assistant to World Wide Knowledge (WWK) and its implications for the future of the Free Web.

Trusting Digital Content in the Age of AI: How Might We Design Modern Information Ecosystems for Authenticity?
Workshop | Oct 22, 2024 | 9:00 AM - 5:30 PM

In this workshop we will ask: How might we design information systems for authenticity? We will bring together technologists, journalists, legal experts, and archivists for an interdisciplinary conversation about declining trust in digital content and how we might bolster trust in our information ecosystems.

Privacy, Safety, Security

Bridging the Farm: AI for Science at SLAC and Stanford
Workshop | Oct 02, 2024

This workshop will highlight the significant impact of AI applications in Department of Energy (DOE) science by showcasing SLAC's research program, which includes national-scale science facilities such as particle accelerators, x-ray lasers, and the Rubin Observatory.

Sciences (Social, Health, Biological, Physical)

Stanford HAI 2024 Congressional Boot Camp on Artificial Intelligence
Workshop | Aug 05, 2024

Congressional staff play a key role in shaping and developing policy on critical technology areas such as artificial intelligence (AI).

Government, Public Administration

9th CSLI Workshop on Logic, Rationality & Intelligent Interaction
Workshop | Jun 01, 2024

Barwise Room, Cordura Hall

Human Reasoning

Beyond Moderation: How Can We Use Technology to De-Escalate Political Conflict?
Workshop | Nov 10, 2023 | 9:00 AM - 5:00 PM

Workshop on Responsible and Open Foundation Models
Workshop | Sep 21, 2023 | 8:00 AM - 2:30 PM

In the last year, open foundation models have proliferated widely. Given the rapid adoption of these models, cultivating a responsible open source AI ecosystem is crucial and urgent. Our workshop presents an opportunity to learn from experts in different fields who have worked on responsible release strategies, risk mitigation, and policy interventions that can help.

Global AI: Reframing the Conversation
Workshop | Apr 13, 2023 | 9:00 AM - 12:00 PM

AI+Education Summit: AI in the Service of Teaching and Learning
Workshop | Feb 15, 2023 | 9:00 AM - 5:00 PM

AI Training for U.S. General Services Administration
Workshop | Oct 06, 2022

Stanford HAI Congressional Boot Camp on Artificial Intelligence
Workshop | Aug 08, 2022

Congressional staff play a key role in shaping and developing policy on critical technology areas such as artificial intelligence (AI).

Government, Public Administration

Advancing Technology for a Sustainable Planet
Workshop | Jul 25, 2022

Data-Centric AI Virtual Workshop
Workshop | Nov 18, 2021 | 9:00 AM - 11:05 AM

AI and Labor Markets Roundtable
Workshop | May 18, 2020 | 11:00 AM - 1:00 PM

Faculty Leaders: Susan Athey and Erik Brynjolfsson

HAI Faculty Associate Director Susan Athey and incoming HAI Senior Fellow Erik Brynjolfsson invited researchers working on AI and labor markets across the Stanford community to come together in a virtual event on May 18 to present and discuss ongoing research and build ties for future collaborations.

How is AI impacting the labor market? How is it changing labor demand and supply, occupations, hiring, labor mobility, firm organization, and behavior? Groups from disciplines across campus, including Economics, Business, Management Science and Engineering, Political Science, Sociology, and Computer Science, are working on these important questions from different angles, with different lenses and methodologies. The workshop aimed to bring the working group together to enable cross-disciplinary discussions and inspire future research collaborations.

Psychology-Neuroscience-Artificial Intelligence, Part 2
Workshop | Feb 27, 2020 | 12:00 PM - 2:00 PM

AI and International Security
Workshop | Feb 26, 2020 | 9:00 AM - 12:00 PM

Workshop Leader: Emilie Silva

A one-day interdisciplinary workshop involving Stanford faculty and researchers, a select number of outside academics from other institutions, and a small number of private sector and governmental analysts, focusing on the intersection of AI and various aspects of international security. The goal was to identify concrete research agendas and synergies, identify gaps in our understanding, and build a network of scholars and experts to address these challenges.

The past several years have seen startling advances in artificial intelligence and machine learning, driven in part by progress in deep neural networks. AI-enabled machines can now meet or exceed human abilities in a wide range of tasks, including chess, Jeopardy, Go, poker, object recognition, and driving in some settings. AI systems are being applied to solve a range of problems in transportation, finance, stock trading, health care, intelligence analysis, and cybersecurity. Despite calls from prominent scientists to avoid militarizing AI, nation-states are certain to use AI and machine learning tools for national security purposes.

A technology with the potential for such sweeping changes across human society should be evaluated for its potential effects on international stability. Many national security applications of AI could be beneficial, such as advanced cyber defenses that can identify new malware, automated computer security tools that find and patch vulnerabilities, or machine learning systems that uncover suspicious behavior by terrorists. Current AI systems have substantial limitations and vulnerabilities, however, and a headlong rush into national security applications of artificial intelligence could pose risks to international stability. Some security-related applications of AI could be destabilizing, and competitive dynamics between nations could lead to harmful consequences such as a “race to the bottom” on AI safety. Other security-related applications of AI could improve international stability.

CNAS is undertaking a two-year, in-depth, interdisciplinary project to examine how artificial intelligence will influence international security and stability. It is critical for global stability to begin a discussion about ways to mitigate the risks while taking advantage of the benefits of autonomous systems and artificial intelligence. The project will build a community from three sectors that often do not intersect: AI researchers in academia and business; international security experts in academia; and policy practitioners in government, both civilian and military. Through a series of workshops, commissioned papers, and reports, the project will foster a community of practice and begin laying the foundations for a field of study on AI and international security. It will conclude with recommendations to policymakers for ways to capitalize on the potential stabilizing benefits of artificial intelligence, while avoiding uses that could undermine stability.

Psychology-Neuroscience-Artificial Intelligence, Part 1
Workshop | Feb 06, 2020 | 12:00 AM - 2:00 PM

Sciences (Social, Health, Biological, Physical)

Uncertainty in AI
Workshop | Dec 10, 2019 | 3:00 PM - 4:00 PM

Faculty Leaders: Elaine Treharne and Mark Algee-Hewitt

This workshop, focused on “Uncertainty in AI Situations,” asked researchers to consider what an AI can do when faced with uncertainty. Machine learning algorithms whose classifications rely on posterior probabilities of membership often present ambiguous results where, due to unavailable training data or ambiguous cases, the likelihood of any outcome is approximately even. In such situations, the human programmers must decide how the machine handles ambiguity: whether making a “best-fit” classification or reporting potential error, there is always a potential conflict between the mathematical rigor of the model and the ambiguity of real-world use cases.

Some of the questions asked, which begin the process of advancing AI toward a new intellectual understanding of the trickiest problems in the machine-learning environment:

  • How do researchers create training sets that engage with uncertainty, particularly when deciding between reflecting real-world data and curating data sets to avoid bias?
  • How can we frame ontologies, typologies, and epistemologies that can account for, and help solve, ambiguity in data and indecision in AI?

Machine Learning

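The decision the workshop description highlights, forcing a “best-fit” label versus reporting potential error when posteriors are nearly even, can be sketched as a simple reject-option rule. This is a hypothetical illustration, not a method from the workshop; the function name and the `margin` threshold are assumptions chosen for the sketch.

```python
def classify_with_abstention(posteriors, margin=0.1):
    """Return the best-fit label, or None (abstain) when the top two
    posterior probabilities are within `margin` of each other."""
    ranked = sorted(posteriors.items(), key=lambda kv: kv[1], reverse=True)
    best_label, best_p = ranked[0]
    runner_up_p = ranked[1][1] if len(ranked) > 1 else 0.0
    if best_p - runner_up_p < margin:
        # Ambiguous case: report uncertainty instead of guessing.
        return None
    return best_label

# A confident case and a near-even (ambiguous) case:
print(classify_with_abstention({"cat": 0.9, "dog": 0.1}))    # cat
print(classify_with_abstention({"cat": 0.51, "dog": 0.49}))  # None
```

The `margin` parameter is exactly where the human judgment the workshop describes enters: setting it to zero recovers the always-guess “best-fit” behavior, while a larger value trades coverage for the ability to flag ambiguous inputs.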
AI and Ethics
Workshop | Oct 30, 2019 | 9:00 AM - 3:00 PM

Faculty Leaders: Rob Reich and Seth Lazar

Conversations about ethics and AI are commonplace today, but they are often pitched at a high level of generality or abstraction. In this workshop, we gathered leading young scholars, chiefly philosophers, to discuss a more detailed research agenda with a particular focus on moral and political philosophy and their intersections with AI. Topics included AI and explainability, AI and value alignment, governance of AI, and more.