Taylor Sorensen (he/him)

Hi! My name is Taylor Sorensen. I’m a Computer Science PhD student at the University of Washington, where I’m fortunate to be advised by Yejin Choi. I research natural language processing (NLP) and artificial intelligence (AI) and am especially interested in pluralistic alignment, large language models, and NLP for social good. I’m also a student researcher at Google DeepMind researching pluralistic alignment with the VOICES team.

Publications

Publications are listed in reverse chronological order. For a complete list, see my Google Scholar profile.

  • Can Language Models Reason about Individualistic Human Values and Preferences?
Liwei Jiang, Taylor Sorensen, Sydney Levine, Yejin Choi
arXiv preprint
    Paper

  • Modular Pluralism: Pluralistic Alignment via Multi-LLM Collaboration
    Shangbin Feng, Taylor Sorensen, Yuhan Liu, Jillian Fisher, Chan Young Park, Yejin Choi, Yulia Tsvetkov
    EMNLP 2024
    Paper

  • A Roadmap to Pluralistic Alignment
    Taylor Sorensen, Jared Moore, Jillian Fisher, Mitchell Gordon, Niloofar Mireshghallah, Christopher Michael Rytting, Andre Ye, Liwei Jiang, Ximing Lu, Nouha Dziri, Tim Althoff, Yejin Choi
    ICML 2024 Position Paper
    Paper, Featured in Jack Clark’s Import AI and Interconnects, Invited Talk

  • Leveraging AI for democratic discourse: Chat interventions can improve online political conversations at scale
    Lisa P. Argyle, Christopher A. Bail, Ethan C. Busby, Joshua R. Gubler, Thomas Howe, Christopher Rytting, Taylor Sorensen, David Wingate
PNAS
    Paper, Science Journal for Kids Adaptation

  • Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties
    Taylor Sorensen, Liwei Jiang, Jena Hwang, Sydney Levine, Valentina Pyatkin, Peter West, Nouha Dziri, Ximing Lu, Kavel Rao, Chandra Bhagavatula, Maarten Sap, John Tasioulas, Yejin Choi
    AAAI 2024 Oral (top 3% of submissions)
    Paper, Presentation, Demo, Code, Dataset, Model, Invited Talk

  • NovaCOMET: Open Commonsense Foundation Models with Symbolic Knowledge Distillation
Peter West, Ronan Le Bras, Taylor Sorensen, Bill Lin, Liwei Jiang, Ximing Lu, Khyathi Chandu, Jack Hessel, Ashutosh Baheti, Chandra Bhagavatula, Yejin Choi
    Findings of EMNLP 2023
    Paper

  • Impossible Distillation: from Low-Quality Model to High-Quality Dataset & Model for Summarization and Paraphrasing
    Jaehun Jung, Peter West, Liwei Jiang, Faeze Brahman, Ximing Lu, Jillian Fisher, Taylor Sorensen, Yejin Choi
    NAACL 2024
    Paper

  • Towards Coding Social Science Datasets with Language Models
    Christopher Michael Rytting, Taylor Sorensen, Lisa Argyle, Ethan Busby, Nancy Fulda, Joshua Gubler, David Wingate
arXiv preprint
    Paper

  • Prompt Compression and Contrastive Conditioning for Controllability and Toxicity Reduction in Language Models
    David Wingate, Mohammad Shoeybi, Taylor Sorensen
    Findings of EMNLP 2022
    Paper, Code

  • An Information-theoretic Approach to Prompt Engineering Without Ground Truth Labels
    Taylor Sorensen, Joshua Robinson, Christopher Michael Rytting, Alexander Glenn Shaw, Kyle Jeffrey Rogers, Alexia Pauline Delorey, Mahmoud Khalil, Nancy Fulda, David Wingate
    ACL 2022
    Paper, Code, Presentation

NL-Augmenter: A Framework for Task-Sensitive Natural Language Augmentation
    Kaustubh D Dhole, Varun Gangal, Sebastian Gehrmann, …, Taylor Sorensen et al.
arXiv preprint
    Paper, Code

  • Using first principles for deep learning and model-based control of soft robots
    Curtis C Johnson, Tyler Quackenbush, Taylor Sorensen, David Wingate, Marc D Killpack
    Frontiers in Robotics and AI
    Paper, Code

Invited Talks

  • University College London: Aligning AI with Pluralistic Human Values. Sep 2024
  • Vienna Alignment Workshop: Pluralistic Alignment. July 2024
  • BuzzRobot AI Community: Aligning AI with Pluralistic Human Values. May 2024. Recording
  • IBM Research: AI and Pluralistic Human Values. March 2024

Website last updated: Oct 14, 2024