I am a Ph.D. candidate in Computer Science at the University of Maryland, advised by Jordan Boyd-Graber. I am also a member of the CLIP lab.
I study natural language processing and machine learning. Currently, I focus on building NLP models with limited data using cross-lingual and human-in-the-loop methods.
During my Ph.D., I visited New York University (hosted by Kyunghyun Cho) and interned at Microsoft Research (LIT group). Previously, I completed my B.S. and M.S.E. at Johns Hopkins University, where I worked with Jason Eisner and David Yarowsky.
* = equal contribution
Noisy Labels Can Induce Good Representations
Jingling Li, Mozhi Zhang, Keyulu Xu, John P. Dickerson, Jimmy Ba
arXiv preprint
arxiv
bibtex
How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks
Keyulu Xu, Mozhi Zhang, Jingling Li, Simon S. Du, Ken-ichi Kawarabayashi, Stefanie Jegelka
ICLR 2021
(Oral)
arxiv
bibtex
Interactive Refinement of Cross-Lingual Word Embeddings
Michelle Yuan*, Mozhi Zhang*, Benjamin Van Durme, Leah Findlater, Jordan Boyd-Graber
EMNLP 2020
arxiv
bibtex
code
video
Why Overfitting Isn't Always Bad: Retrofitting Cross-Lingual Word Embeddings to Dictionaries
Mozhi Zhang*, Yoshinari Fujinuma*, Michael J. Paul, Jordan Boyd-Graber
ACL 2020
arxiv
bibtex
code
video
What Can Neural Networks Reason About?
Keyulu Xu, Jingling Li, Mozhi Zhang, Simon S. Du, Ken-ichi Kawarabayashi, Stefanie Jegelka
ICLR 2020
(Spotlight)
arxiv
bibtex
code
Exploiting Cross-Lingual Subword Similarities in Low-Resource Document Classification
Mozhi Zhang, Yoshinari Fujinuma, Jordan Boyd-Graber
AAAI 2020
arxiv
bibtex
Are Girls Neko or Shōjo? Cross-Lingual Alignment of Non-Isomorphic Embeddings with Iterative Normalization
Mozhi Zhang, Keyulu Xu, Ken-ichi Kawarabayashi, Stefanie Jegelka, Jordan Boyd-Graber
ACL 2019
arxiv
bibtex
code