I am an associate professor in the University of Maryland Computer Science Department (tenure home), Institute for Advanced Computer Studies, iSchool, and Language Science Center. Previously, I was an assistant professor in the University of Colorado's Department of Computer Science (tenure granted in 2017). I was a graduate student at Princeton with David Blei.

My research focuses on making machine learning more useful, more interpretable, and able to learn from and interact with humans. This helps users sift through decades of documents; discover when individuals lie, reframe, or change the topic in a conversation; or compete against humans in games based on natural language.

Sign up for an appointment

Recent Publications

  • Chenglei Si, Zhe Gan, Zhengyuan Yang, Shuohang Wang, Jianfeng Wang, Jordan Boyd-Graber, and Lijuan Wang. Prompting GPT-3 To Be Reliable. International Conference on Learning Representations, 2023. [Code] [Bibtex]
  • Chenglei Si, Weijia Shi, Chen Zhao, Luke Zettlemoyer, and Jordan Lee Boyd-Graber. Getting MoRE out of Mixture of Language Model Reasoning Experts. Findings of Empirical Methods in Natural Language Processing, 2023. [Bibtex]
    Accessible Abstract: A computer can be asked many kinds of questions: general knowledge questions, common sense questions, or math questions. Each type of question is best answered by a particular kind of expert. This paper investigates whether we can automatically detect which kind of expert is best suited to answer a question and route the question to that expert.
  • YooYeon Sung, Naeemul Hassan, and Jordan Boyd-Graber. Not all Fake News is Written: A Dataset and Analysis of Misleading Video Headlines. Empirical Methods in Natural Language Processing, 2023. [Bibtex]
    Accessible Abstract: Misinformation online is not all text-based. More information is being consumed in video form, and both social media companies and external monitors need to know when misleading videos are being shared online. We create a new dataset of misleading videos and describe what makes the problem so challenging.
  • Sander V Schulhoff, Jeremy Pinto, Anaum Khan, Louis-François Bouchard, Chenglei Si, Jordan Lee Boyd-Graber, Svetlina Anati, Valen Tagliabue, Anson Liu Kost, and Christopher R Carnahan. Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs Through a Global Prompt Hacking Competition. Empirical Methods in Natural Language Processing, 2023. [Bibtex]
    Accessible Abstract: As more AI services online are provided by prompted language models, we need to be aware of the weaknesses and exploits of these models. We present the HackAPrompt competition to help elicit a broad array of exploits that get around large language models.
  • HyoJung Han, Marine Carpuat, and Jordan Boyd-Graber. Automatic Explicitation to Bridge the Background Knowledge Gap in Translation and its Evaluation with Multilingual QA. Empirical Methods in Natural Language Processing, 2023. [Bibtex]
    Accessible Abstract: Sometimes when you are translating from one language to another, a literal translation is not enough: to actually understand what is being said, a listener needs additional context. Professional translators know this, and the process they use to capture cultural differences between source and target audiences is called "explicitation". We introduce techniques for automatically generating explicitations, motivated by WikiExpl (a dataset collected from Wikipedia and annotated by human translators), and evaluate the generated explicitations with multilingual question answering.
  • Benjamin Börschinger, Jordan Boyd-Graber, Christian Buck, Jannis Bulian, Massimiliano Ciaramita, Michelle Chen Huebscher, Wojciech Gajewski, Yannic Kilcher, Rodrigo Nogueira, and Lierni Sestorain Saralegu. Meta Answering for Machine Reading. ArXiv, Preprint. [Preprint] [Bibtex]
  • Pedro Rodriguez, Shi Feng, Mohit Iyyer, He He, and Jordan Boyd-Graber. Quizbowl: The Case for Incremental Question Answering. ArXiv, Preprint. [Webpage] [Bibtex]