Hello. I am a computer science Ph.D. candidate at the Courant Institute of Mathematical Sciences, New York University. I am a member of the Machine Learning for Language (ML²) group (a subset of the CILVR group), and I am advised by Prof. He He and Prof. Kyunghyun Cho.
My research focuses on natural language processing and machine learning. My recent interests include text generation, structured prediction, and long-text understanding.
Prior to my Ph.D., I graduated from the University of Chicago (B.S. in mathematics and B.S. in computer science), where I was advised by Prof. Kevin Gimpel of the Toyota Technological Institute at Chicago (TTIC) and the University of Chicago. In Summer 2020, I was a research intern at Google Research New York; in Summer 2021, I was a research intern at Google Brain.
Research
Primary research subfields: text generation (including machine translation), structured prediction, language understanding, and pretraining.
[semantic scholar] [google scholar] [dblp] [abbreviations]
Publications and preprints (2021-)
Amortized Noisy Channel Neural Machine Translation
Richard Yuanzhe Pang, He He, Kyunghyun Cho
In Proceedings of INLG 2022
[paper] [talk] [poster] [bibtex]
SQuALITY: Building a Long-Document Summarization Dataset the Hard Way
Alex Wang, Richard Yuanzhe Pang, Angelica Chen, Jason Phang, Samuel R. Bowman
Preprint, May 2022
[paper] [code] [bibtex]
QuALITY: Question Answering with Long Input Texts, Yes!
Richard Yuanzhe Pang*, Alicia Parrish*, Nitish Joshi*, Nikita Nangia, Jason Phang, Angelica Chen, Vishakh Padmakumar, Johnny Ma, Jana Thompson, He He, Samuel R. Bowman
In Proceedings of NAACL 2022
[paper] [abstract] [data] [code] [leaderboard] [15-min live talk] [slides] [bibtex] | by others: [tfds] [forecast] [press] [scrolls]
Token Dropping for Efficient BERT Pretraining
Le Hou*, Richard Yuanzhe Pang*, Tianyi Zhou, Yuexin Wu, Xinying Song, Xiaodan Song, Denny Zhou
In Proceedings of ACL 2022
[paper] [abstract] [code] [talk] [bibtex]
AgreeSum: Agreement-Oriented Multi-Document Summarization
Richard Yuanzhe Pang*, Adam D. Lelkes*, Vinh Q. Tran*, Cong Yu
In Findings of ACL 2021
[paper] [abstract] [data] [short talk] [bibtex]
Comparing Test Sets with Item Response Theory
Clara Vania*, Phu Mon Htut*, William Huang*, Dhara Mungra, Richard Yuanzhe Pang, Jason Phang, Haokun Liu, Kyunghyun Cho, Samuel R. Bowman
In Proceedings of ACL 2021
[paper] [abstract] [code] [bibtex]
Text Generation by Learning from Demonstrations
Richard Yuanzhe Pang, He He
In Proceedings of ICLR 2021
tl;dr: a training objective for high-precision generation that addresses the train/test objective mismatch and the history mismatch (a rough sketch follows below)
[paper] [abstract] [openreview] [poster] [slides] [code] [discussion] [bibtex]
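To make the one-line summary above more concrete, here is a minimal, hedged sketch of the core idea, not the paper's exact formulation (the paper uses importance weights and reward estimates): reweight each demonstration token's log-likelihood by the model's own detached probability, so training concentrates on outputs the model already assigns high probability to.

import torch.nn.functional as F

def precision_weighted_demo_loss(logits, target_ids, pad_id):
    # Hypothetical sketch of a GOLD-style objective: maximum likelihood on
    # demonstrations, with each token reweighted by the model's own
    # (detached) probability of that token.
    # logits: (batch, seq_len, vocab); target_ids: (batch, seq_len)
    log_probs = F.log_softmax(logits, dim=-1)
    token_log_probs = log_probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)
    mask = (target_ids != pad_id).float()
    # The weight is the model's current probability of the demonstration token,
    # detached so it scales the loss without contributing gradients itself.
    weights = token_log_probs.detach().exp()
    loss = -(weights * token_log_probs * mask).sum() / mask.sum().clamp(min=1.0)
    return loss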
Publications (-2020)
Improving Joint Training of Inference Networks and Structured Prediction Energy Networks
Lifu Tu, Richard Yuanzhe Pang, Kevin Gimpel
In Proceedings of EMNLP 2020 Workshop on Structured Prediction for NLP (SPNLP); spotlight paper
tl;dr: improving fast approximate+amortized inference for energy-based models in NLP structured prediction
[paper] [abstract] [my slides] [bibtex]
Consistency of a Recurrent Language Model With Respect to Incomplete Decoding
Sean Welleck*, Ilia Kulikov*, Jaedeok Kim, Richard Yuanzhe Pang, Kyunghyun Cho
In Proceedings of EMNLP 2020
Also presented at the non-archival DeepMath 2020
[paper] [abstract] [code] [bibtex]
ENGINE: Energy-Based Inference Networks for Non-Autoregressive Machine Translation
Lifu Tu, Richard Yuanzhe Pang, Sam Wiseman, Kevin Gimpel
In Proceedings of ACL 2020
tl;dr: a "soft" form of knowledge distillation for non-autoregressive MT
[paper] [abstract] [code] [bibtex]
Intermediate-Task Transfer Learning with Pretrained Language Models: When and Why Does It Work?
Yada Pruksachatkun*, Jason Phang*, Haokun Liu*, Phu Mon Htut*, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, Samuel R. Bowman
In Proceedings of ACL 2020
[paper] [abstract] [bibtex]
Unsupervised Evaluation Metrics and Learning Criteria for Non-Parallel Textual Transfer
Richard Yuanzhe Pang, Kevin Gimpel
In Proceedings of EMNLP 2019 Workshop on Neural Generation and Translation (WNGT)
tl;dr: proposing more dimensions for textual transfer evaluation metrics, and losses that target them
[paper] [supplementals] [abstract] [poster] [bibtex]
The Daunting Task of Real-World Textual Style Transfer Auto-Evaluation
Richard Yuanzhe Pang
Extended abstract in EMNLP 2019 Workshop on Neural Generation and Translation (WNGT); abstract in Proceedings of the Workshop on Noisy User-generated Text (W-NUT)
tl;dr: an opinion piece arguing that the research on textual style transfer and its evaluation are going astray
[paper] [abstract] [poster] [bibtex]
More info: [semantic scholar] [google scholar] [dblp] [abbreviations]
Discussion
Discussion of GOLD [pdf]
June 2022
tl;dr: GOLD does not maximize the expected reward; it maximizes the expected reward of training examples only (see the sketch below).
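To spell out the distinction, here is a rough sketch; the notation, with \(\pi_\theta\) the model, \(D\) the training set, and \(R\) a sequence-level reward, is assumed here rather than taken from the pdf:

\[
J_{\mathrm{RL}}(\theta) \;=\; \mathbb{E}_{x \sim D}\,\mathbb{E}_{y \sim \pi_\theta(\cdot\mid x)}\!\big[R(y)\big]
\qquad\text{whereas}\qquad
J_{\mathrm{GOLD}}(\theta) \;\approx\; \mathbb{E}_{(x,y) \sim D}\!\big[\pi_\theta(y\mid x)\,R(y)\big],
\]

so the latter can only raise the (model-probability-weighted) reward of sequences that already appear in the training data, not the expected reward under samples drawn from the model.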
More research activities
As a reviewer / program committee member
External
At New York University
At the University of Chicago
Teaching
Presentations
Selected presentations
Other conference presentations with associated proceeding papers
Please email for full CV.
Last updated: July 21, 2022. Contact: My NYU office is at 60 5th Ave. Get in touch at yzpang at _ dot edu (where _ is nyu or uchicago)!