Better Extraction from Text Towards Enhanced Retrieval (BETTER)

Project funded by IARPA
PI: Jordan Boyd-Graber

In collaboration with Ben Van Durme and Michael Paul.


The Better Extraction from Text Towards Enhanced Retrieval (BETTER) Program will develop methods for extracting increasingly fine-grained semantic information, with a focus on events in the form of who-did-what-to-whom-when-where, across multiple languages and problem domains. This extracted information will be applied to an information retrieval task. An additional area of focus is human-in-the-loop computation. Performer systems will need the ability to incorporate human judgments for metrics such as the relevance and accuracy of extracted or retrieved information.

The UMD team focused on improving representations for low-resource languages by using related languages and human interaction.

Project Team

Jordan Boyd-Graber
Assistant Professor, Computer Science (UMD)
Mozhi Zhang
PhD Student, Computer Science (UMD)

Publications (Selected)

  • Mozhi Zhang, Yoshinari Fujinuma, Michael J. Paul, and Jordan Boyd-Graber. Why Overfitting Can Be Good: Retrofitting Cross-Lingual Word Embeddings to Dictionaries. Association for Computational Linguistics, 2020. [Preprint] [Video] [Code] [Bibtex]
  • Michelle Yuan, Hsuan-Tien Lin, and Jordan Boyd-Graber. Cold-start Active Learning through Self-Supervised Language Modeling. Empirical Methods in Natural Language Processing, 2020. [Video] [Bibtex]
  • Michelle Yuan, Mozhi Zhang, Benjamin Van Durme, Leah Findlater, and Jordan Boyd-Graber. Interactive Refinement of Cross-Lingual Word Embeddings. Empirical Methods in Natural Language Processing, 2020. [Video] [Bibtex]


Any opinions, findings, and conclusions or recommendations expressed in this material are those of the researchers and do not necessarily reflect the views of the sponsor.