Taylor Sorensen (he/him)
Hi! My name is Taylor Sorensen. I’m a Computer Science PhD student at the University of Washington, where I’m fortunate to be advised by Yejin Choi. I research natural language processing (NLP) and artificial intelligence (AI) and am especially interested in AI alignment and ethics, large language models, and NLP for social good. I love working with language because I feel it’s the best medium we have for communicating and understanding human intelligence, and I’m passionate about understanding how to make AI/language models work for positive world impact.
Previously, I received my BS in Applied Math and Computer Science at Brigham Young University. I also began an MS there, working with David Wingate on a variety of problems ranging from NLP to machine learning to soft robotics, before leaving to pursue my PhD.
Historically, I’ve worked on a variety of problems ranging from computer vision to RL for soft robotics to ML-based quantitative investing to NLP for drug discovery.
Publications
Publications are listed in reverse chronological order. For a list of all publications, see my Google Scholar profile.
A Roadmap to Pluralistic Alignment
Taylor Sorensen, Jared Moore, Jillian Fisher, Mitchell Gordon, Niloofar Mireshghallah, Christopher Michael Rytting, Andre Ye, Liwei Jiang, Ximing Lu, Nouha Dziri, Tim Althoff, Yejin Choi
arXiv Preprint
Paper
Leveraging AI for democratic discourse: Chat interventions can improve online political conversations at scale
Lisa P. Argyle, Christopher A. Bail, Ethan C. Busby, Joshua R. Gubler, Thomas Howe, Christopher Rytting, Taylor Sorensen, David Wingate
Published in PNAS
Paper
Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties
Taylor Sorensen, Liwei Jiang, Jena Hwang, Sydney Levine, Valentina Pyatkin, Peter West, Nouha Dziri, Ximing Lu, Kavel Rao, Chandra Bhagavatula, Maarten Sap, John Tasioulas, Yejin Choi
AAAI 2024 Oral (top 3% of submissions)
Paper, Presentation, Demo, Code, Dataset, Model
NovaCOMET: Open Commonsense Foundation Models with Symbolic Knowledge Distillation
Peter West, Ronan Le Bras, Taylor Sorensen, Bill Lin, Liwei Jiang, Ximing Lu, Khyathi Chandu, Jack Hessel, Ashutosh Baheti, Chandra Bhagavatula, Yejin Choi
Findings of EMNLP 2023
Paper
Impossible Distillation: from Low-Quality Model to High-Quality Dataset & Model for Summarization and Paraphrasing
Jaehun Jung, Peter West, Liwei Jiang, Faeze Brahman, Ximing Lu, Jillian Fisher, Taylor Sorensen, Yejin Choi
arXiv Preprint
Paper
Towards Coding Social Science Datasets with Language Models
Christopher Michael Rytting, Taylor Sorensen, Lisa Argyle, Ethan Busby, Nancy Fulda, Joshua Gubler, David Wingate
arXiv Preprint
Paper
Prompt Compression and Contrastive Conditioning for Controllability and Toxicity Reduction in Language Models
David Wingate, Mohammad Shoeybi, Taylor Sorensen
Findings of EMNLP 2022
Paper, Code
An Information-theoretic Approach to Prompt Engineering Without Ground Truth Labels
Taylor Sorensen, Joshua Robinson, Christopher Michael Rytting, Alexander Glenn Shaw, Kyle Jeffrey Rogers, Alexia Pauline Delorey, Mahmoud Khalil, Nancy Fulda, David Wingate
ACL 2022
Paper, Code, Presentation
NL-Augmenter: A Framework for Task-Sensitive Natural Language Augmentation
Kaustubh D Dhole, Varun Gangal, Sebastian Gehrmann, …, Taylor Sorensen et al.
arXiv Preprint
Paper, Code
Using First Principles for Deep Learning and Model-Based Control of Soft Robots
Curtis C Johnson, Tyler Quackenbush, Taylor Sorensen, David Wingate, Marc D Killpack
Frontiers in Robotics and AI
Paper, Code
Website last updated: Feb 7, 2024