I am an associate professor in the University of Maryland Computer Science Department (tenure home), the Institute for Advanced Computer Studies, the iSchool, and the Language Science Center. Previously, I was an assistant professor in the University of Colorado Boulder's Department of Computer Science (tenure granted in 2017). I was a graduate student at Princeton, advised by David Blei.

My research focuses on making machine learning more useful and more interpretable, and on enabling it to learn from and interact with humans. This work helps users sift through decades of documents; discover when individuals lie, reframe, or change the topic in a conversation; and compete against humans in games based in natural language.

Sign up for an appointment

Recent Publications

  • Chen Zhao, Chenyan Xiong, Xin Qian, and Jordan Boyd-Graber. Complex Factoid Question Answering with a Free-Text Knowledge Graph. The Web Conference, 2020. [Bibtex]
  • Fenfei Guo, Jordan Boyd-Graber, Mohit Iyyer, and Leah Findlater. Which Evaluations Uncover Sense Representations that Actually Make Sense? Language Resources and Evaluation Conference, 2020. [Bibtex]
  • Alison Smith, Varun Kumar, Jordan Boyd-Graber, Kevin Seppi, and Leah Findlater. Digging into User Control: Perceptions of Adherence and Instability in Transparent Models. Intelligent User Interfaces, 2020. [Bibtex]
  • Tianze Shi, Chen Zhao, Jordan Boyd-Graber, Hal Daumé III, and Lillian Lee. On the Potential of Lexico-logical Alignments for Semantic Parsing to SQL Queries. Findings of EMNLP, 2020. [Bibtex]
  • Wenyan Li, Alvin Grissom II, and Jordan Boyd-Graber. An Attentive Recurrent Model for Incremental Prediction of Sentence-final Verbs. Findings of EMNLP, 2020. [Bibtex]
  • Michelle Yuan, Mozhi Zhang, Benjamin Van Durme, Leah Findlater, and Jordan Boyd-Graber. Interactive Refinement of Cross-Lingual Word Embeddings. Empirical Methods in Natural Language Processing, 2020. [Bibtex]
  • Michelle Yuan, Hsuan-Tien Lin, and Jordan Boyd-Graber. Cold-start Active Learning through Self-Supervised Language Modeling. Empirical Methods in Natural Language Processing, 2020. [Bibtex]
  • Alison Smith, Jordan Boyd-Graber, Ron Fan, Melissa Birchfield, Tongshuang Wu, Dan Weld, and Leah Findlater. No Explainability without Accountability: An Empirical Study of Explanations and Feedback in Interactive ML. Computer-Human Interaction, 2020. [Bibtex]
  • Mozhi Zhang, Yoshinari Fujinuma, and Jordan Boyd-Graber. Exploiting Cross-Lingual Subword Similarities in Low-Resource Document Classification. Association for the Advancement of Artificial Intelligence, 2020. [Bibtex]
  • Mozhi Zhang, Yoshinari Fujinuma, Michael J. Paul, and Jordan Boyd-Graber. Why Overfitting Isn't Always Bad: Retrofitting Cross-Lingual Word Embeddings to Dictionaries. Association for Computational Linguistics, 2020. [Preprint] [Video] [Code] [Bibtex]
  • Jordan Boyd-Graber and Benjamin Börschinger. What Question Answering can Learn from Trivia Nerds. Association for Computational Linguistics, 2020. [Preprint] [Video] [Bibtex]
  • Denis Peskov, Benny Cheng, Ahmed Elgohary, Joe Barrow, Cristian Danescu-Niculescu-Mizil, and Jordan Boyd-Graber. It Takes Two to Lie: One to Lie and One to Listen. Association for Computational Linguistics, 2020. [Video] [Data and Code] [Bibtex]
  • Benjamin Börschinger, Jordan Boyd-Graber, Christian Buck, Jannis Bulian, Massimiliano Ciaramita, Michelle Chen Huebscher, Wojciech Gajewski, Yannic Kilcher, Rodrigo Nogueira, and Lierni Sestorain Saralegu. Meta Answering for Machine Reading. ArXiv, Preprint. [Preprint] [Bibtex]
  • Pedro Rodriguez, Shi Feng, Mohit Iyyer, He He, and Jordan Boyd-Graber. Quizbowl: The Case for Incremental Question Answering. ArXiv, Preprint. [Webpage] [Bibtex]