Tatsunori Hashimoto | Stanford HAI


Tatsunori Hashimoto

Assistant Professor, Stanford


I am currently an assistant professor in the Computer Science Department at Stanford University.

My research uses tools from statistics to make machine learning systems more robust and trustworthy, especially complex systems such as large language models. The goal of my research is to use robustness and worst-case performance as a lens to understand and make progress on several fundamental challenges in machine learning and natural language processing.

Latest Related to Tatsunori Hashimoto

policy brief

Demographic Stereotypes in Text-to-Image Generation

Aylin Caliskan, Debora Nozza, Myra Cheng, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Federico Bianchi, Dan Jurafsky, Tatsunori Hashimoto, James Zou
Generative AI · Foundation Models · Ethics, Equity, Inclusion · Nov 30

In this brief, Stanford scholars test a variety of ordinary text prompts to examine how major text-to-image AI models encode a wide range of dangerous biases about demographic groups.

policy brief

Foundation Models and Copyright Questions

Xuechen Li, Peter Henderson, Mark A. Lemley, Dan Jurafsky, Tatsunori Hashimoto, Percy Liang
Foundation Models · Regulation, Policy, Governance · Nov 02

Foundation models are often trained on large volumes of copyrighted material. In the United States, AI researchers have long relied on fair use doctrine to avoid copyright issues with training data. However, our U.S. case law analysis in this brief highlights that fair use is not guaranteed for foundation models and that the risk of copyright infringement is real, though the exact extent remains uncertain. We argue that the United States needs a two-pronged approach to addressing these copyright issues—a mix of legal and technical mitigations that will allow us to harness the positive impact of foundation models while reducing intellectual property harms to creators.

policy brief

Whose Opinions Do Language Models Reflect?

Cinoo Lee, Esin Durmus, Faisal Ladhak, Shibani Santurkar, Tatsunori Hashimoto, Percy Liang
Machine Learning · Sep 20

Since the November 2022 debut of ChatGPT, language models have been all over the news. But as people use chatbots—to write stories and look up recipes, to make travel plans and even further their real estate business—journalists, policymakers, and members of the public are increasingly paying attention to the important question of whose opinions these language models reflect. In particular, one emerging concern is that AI-generated text may be able to influence our views, including political beliefs, without us realizing it. This brief introduces a quantitative framework that allows policymakers to evaluate the behavior of language models to assess what kinds of opinions they reflect.

All Related

Extending the WILDS Benchmark for Unsupervised Adaptation
Ananya Kumar, Etienne David, Henrik Marklund, Ian Stavness, Irena Gao, Kendrick Shen, Pang Wei Koh, Sang Michael Xie, Sara Beery, Shiori Sagawa, Weihua Hu, Kate Saenko, Sergey Levine, Wei Guo, Tony Lee, Michihiro Yasunaga, Jure Leskovec, Tatsunori Hashimoto
Apr 24, 2022
Research
Large Language Models Can Be Strong Differentially Private Learners
Xuechen Li, Florian Tramèr, Tatsunori Hashimoto, Percy Liang
Jan 01, 2022
Research