Overview
Project Team
Publications
Software
Project funded by the National Science Foundation (originally IIS-1652666 at the University of Colorado, now 1822494 at the University of Maryland)
PI: Jordan Boyd-Graber, University of Maryland
This CAREER project investigates how humans and computers can work together to answer questions. Humans and computers possess complementary skills: humans have extensive commonsense understanding of the world and greater facility with unconventional language, while computers can effortlessly memorize countless facts and retrieve them in an instant. This proposal helps machines understand who people, places, and characters are; how to communicate this information to humans; and how to allow humans and computers to collaborate in question answering using limited information. A key component of this proposal is answering questions word-by-word: this forces both humans and computers to answer questions using information as efficiently as possible. In addition to embedding these skills in question answering tasks, this proposal has an extensive outreach program to exhibit this technology in interactive question answering competitions for high school and college students.
This research is made possible by a new representation of entities in a medium-dimensional embedding that encodes relationships between entities (e.g., the representations of "Goodluck Jonathan" and "Nigeria" encode that the former is the leader of the latter), enabling the system to answer questions about Nigeria. We validate the effectiveness of these representations both through traditional question answering evaluations and through interactive experiments with human collaboration to ensure that we can visualize these representations effectively. In addition to helping train computers to answer questions, we use opponent modeling and reinforcement learning to help train humans to better answer questions.
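As a toy illustration of how an embedding can encode a relationship as a translation, here is a TransE-style sketch with hand-built three-dimensional vectors. This is an illustrative assumption, not the project's actual learned representation: the vectors, the `leader_of` relation, and the tiny dimensionality are all made up for clarity.

```python
import math

# Hand-built toy vectors (illustrative assumptions, not learned embeddings):
# a relationship is modeled as a translation in the embedding space.
leader_of = (1.0, 0.0, 0.0)
country = {"Nigeria": (0.0, 1.0, 0.0), "Germany": (0.0, 0.0, 1.0)}
person = {
    "Goodluck Jonathan": tuple(c + r for c, r in zip(country["Nigeria"], leader_of)),
    "Angela Merkel": tuple(c + r for c, r in zip(country["Germany"], leader_of)),
}

def relation_distance(p, c):
    """Distance between (person - leader_of) and country; 0 means the relation holds."""
    return math.dist(tuple(x - r for x, r in zip(person[p], leader_of)), country[c])
```

In this toy setup the correct pairing ("Goodluck Jonathan", "Nigeria") has distance zero, while mismatched pairings do not; a trained system would learn such vectors from text rather than having them typed in.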
Jordan Boyd-Graber, Assistant Professor, Computer Science
Ahmed Elgohary, Ph.D. student, Computer Science
Shi Feng, Ph.D. student, Computer Science
HyoJung Han, Ph.D. student, Computer Science
Wanrong He, Undergraduate (Tsinghua), Computer Science
Pedro Rodriguez, Ph.D. student, Computer Science
Matthew Shu, Undergraduate (Yale), Computer Science
Yoo Yeon Sung, Ph.D. student, iSchool
Eric Wallace, Undergraduate, Computer Science
Chen Zhao, Ph.D. student, Computer Science
@article{Rodriguez:Feng:Iyyer:He:Boyd-Graber-Preprint, Title = {Quizbowl: The Case for Incremental Question Answering}, Author = {Pedro Rodriguez and Shi Feng and Mohit Iyyer and He He and Jordan Boyd-Graber}, Journal = {ArXiv}, Year = {Preprint}, Url = {https://arxiv.org/abs/1904.04792}, }
@inproceedings{Feng:Boyd-Graber-2022, Title = {Learning to Explain Selectively: A Case Study on Question Answering}, Author = {Shi Feng and Jordan Boyd-Graber}, Booktitle = {Empirical Methods in Natural Language Processing}, Year = {2022}, Location = {Abu Dhabi}, Url = {http://umiacs.umd.edu/~jbg//docs/2022_emnlp_augment.pdf}, }
Accessible Abstract: Many AI methods are a black box: input goes in, predictions come out. While there are many explanation tools that you can add to these predictions, how do you know if they are any good? In this work presented at EMNLP, we put a human in front of an AI that is trying to answer questions; our hypothesis is that you can measure how good the underlying explanations are by how much the human's score goes up. This 2022 EMNLP publication not only measures which combinations of explanations are most effective for an individual: we use bandit exploration to quickly figure out which set of explanations best helps a specific user.
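The selective-explanation idea can be sketched as a multi-armed bandit. The epsilon-greedy loop below is an illustrative stand-in, not the paper's actual algorithm; the arm names and the scalar reward are hypothetical.

```python
import random

random.seed(0)  # reproducible sketch

# Hypothetical "arms": candidate sets of explanations to show a user.
explanation_sets = ["highlights", "evidence", "confidence", "highlights+evidence"]
counts = {a: 0 for a in explanation_sets}
values = {a: 0.0 for a in explanation_sets}  # running mean reward per arm

def choose(eps=0.1):
    """Epsilon-greedy: explore occasionally, otherwise exploit the best-looking arm."""
    if random.random() < eps:
        return random.choice(explanation_sets)
    return max(explanation_sets, key=lambda a: values[a])

def update(arm, reward):
    """Incremental-mean update after observing how much the user's score rose."""
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]
```

In deployment the reward would be the change in the user's answering score after seeing that explanation set; here any scalar feedback works.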
@inproceedings{Han:Carpuat:Boyd-Graber-2022, Title = {SimQA: Detecting Simultaneous MT Errors through Word-by-Word Question Answering}, Author = {HyoJung Han and Marine Carpuat and Jordan Boyd-Graber}, Booktitle = {Empirical Methods in Natural Language Processing}, Year = {2022}, Location = {Abu Dhabi}, Url = {http://umiacs.umd.edu/~jbg//docs/2022_emnlp_simqa.pdf}, }
Accessible Abstract: Simultaneous interpretation (where a translation happens word by word, before the source sentence is finished) is difficult to evaluate. We created a new evaluation framework based on the following scenario: imagine that you're thrown into a trivia game show in a language you don't know. Specifically, it's a game format where the question is revealed word by word and you interrupt to answer as soon as possible. Our hypothesis is that a monolingual player (who doesn't speak the source language) will do better in the game with a better simultaneous translation system. In this 2022 EMNLP publication, we show that this evaluation is not only cheaper (you just need to translate the answer) but can also detect hallucinations and undertranslations better than existing evaluation methods.
@article{He:Mao:Boyd-Graber-2022, Title = {Cheater's Bowl: Human vs. Computer Search Strategies for Open-Domain QA}, Author = {Wanrong He and Andrew Mao and Jordan Boyd-Graber}, Journal = {Findings of Empirical Methods in Natural Language Processing}, Year = {2022}, Location = {Abu Dhabi}, Url = {http://umiacs.umd.edu/~jbg//docs/2022_emnlp_cheaters.pdf}, }
Accessible Abstract: When the Covid pandemic hit, trivia games moved online. With them came cheating: people tried to quickly Google answers. This is bad for sportsmanship but a good source of training data for teaching computers how to find answers. We built an interface to harvest this training data from trivia players and fed the resulting queries into retrieval-based QA systems, showing that these human queries were better than the automatically generated queries used by the current state of the art.
@article{Si:Zhao:Min:Boyd-Graber-2022, Title = {Re-Examining Calibration: The Case of Question Answering}, Author = {Chenglei Si and Chen Zhao and Sewon Min and Jordan Boyd-Graber}, Journal = {Findings of Empirical Methods in Natural Language Processing}, Year = {2022}, Location = {Abu Dhabi}, Url = {http://umiacs.umd.edu/~jbg//docs/2022_emnlp_calibration.pdf}, }
Accessible Abstract: Calibration is an important problem in question answering: if a search engine or virtual assistant doesn't know the answer to a question, it should probably abstain from showing one (to save embarrassment, as when Google said a horse had six legs). This EMNLP Findings paper shows that existing metrics for testing how well a QA system is calibrated push calibrated confidence toward the average confidence. We propose an alternate method, both for evaluation and for generating better calibration, that looks at how models change as they learn.
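For readers unfamiliar with calibration metrics, here is a minimal sketch of expected calibration error (ECE), a standard metric of the family the paper examines; this is a generic textbook formulation, not necessarily the exact metric the paper critiques.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence; ECE is the bin-weighted gap
    between average confidence and actual accuracy."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0 into last bin
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(1 for _, ok in b if ok) / len(b)
        ece += len(b) / total * abs(avg_conf - accuracy)
    return ece
```

A system that says "90% sure" and is right 90% of the time scores near zero; a system that says "90% sure" and is always wrong scores near 0.9.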
@inproceedings{Rodriguez:Barrow:Hoyle:Lalor:Jia:Boyd-Graber-2021, Title = {Evaluation Examples Are Not Equally Informative: How Should That Change NLP Leaderboards?}, Author = {Pedro Rodriguez and Joe Barrow and Alexander Hoyle and John P. Lalor and Robin Jia and Jordan Boyd-Graber}, Booktitle = {Association for Computational Linguistics}, Year = {2021}, Url = {http://umiacs.umd.edu/~jbg//docs/2021_acl_leaderboard.pdf}, }
Accessible Abstract: When can we call an AI "intelligent"? Just as with humans, a common approach is to ask it a bunch of questions. The questions posed to modern machine learning methods are collected into leaderboards to monitor progress, but beyond ranking approaches, this does not help us understand our problems or our systems very well. This paper introduces probabilistic models inspired by psychometric approaches called item response theory models (think year-end standardized tests) to better understand how computers answer questions and whether we are asking the right questions. This allows researchers to better compare what kinds of questions systems can answer, better compare human and machine ability, and discover problematic questions (e.g., questions that have incorrect answer keys, are vague, or "trick" those trying to answer them).
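A minimal sketch of the two-parameter logistic (2PL) item response theory model the paragraph alludes to: a subject's chance of answering an item correctly rises with the subject's ability and falls with the item's difficulty, scaled by how discriminative the item is.

```python
import math

def p_correct(ability, difficulty, discrimination=1.0):
    """2PL IRT: probability a subject of given ability answers an item correctly."""
    return 1.0 / (1.0 + math.exp(-discrimination * (ability - difficulty)))
```

Fitting abilities and difficulties jointly across many subjects and items is what lets a leaderboard separate genuinely hard questions from broken ones (e.g., items that even high-ability subjects miss at random).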
@inproceedings{Zhao:Xiong:Boyd-Graber:Daume-III-2021, Title = {Distantly-Supervised Dense Retrieval Enables Open-Domain Question Answering without Evidence Annotation}, Author = {Chen Zhao and Chenyan Xiong and Jordan Boyd-Graber and Hal {Daum\'{e} III}}, Booktitle = {Empirical Methods in Natural Language Processing}, Year = {2021}, Location = {Punta Cana}, Url = {http://umiacs.umd.edu/~jbg//docs/2021_emnlp_weak_dpr.pdf}, }
Accessible Abstract: Answering questions sometimes requires tying multiple pieces of information together. Previous datasets have required annotators to explicitly build these reasoning chains (e.g., to answer "where do I know the cop from Die Hard from", you need to figure out that the actor's name is "Reginald VelJohnson" and then find out that he's best known as the dad on Family Matters). By exploring search queries that get to the right answer, we're able to answer these questions without expensive annotation.
@inproceedings{Rodriguez:Boyd-Graber-2021, Title = {Evaluation Paradigms in Question Answering}, Author = {Pedro Rodriguez and Jordan Boyd-Graber}, Location = {Punta Cana}, Booktitle = {Empirical Methods in Natural Language Processing}, Year = {2021}, Url = {http://umiacs.umd.edu/~jbg//docs/2021_emnlp_paradigms.pdf}, }
Accessible Abstract: Why do we answer questions? Sometimes it's to provide information, which has been the computer science community's interpretation. But sometimes it's to probe or test intelligence. This paper argues that we should think more about that application of question answering and its connection to a foundation of artificial intelligence: the Turing Test. Thus, in addition to the long-standing Cranfield paradigm popularized by information retrieval, this paper proposes an alternative "Manchester paradigm" closer to the Turing Test, trivia games, and education.
@inproceedings{Gor:Webster:Boyd-Graber-2021, Title = {Toward Deconfounding the Influence of Subject's Demographic Characteristics in Question Answering}, Author = {Maharshi Gor and Kellie Webster and Jordan Boyd-Graber}, Booktitle = {Empirical Methods in Natural Language Processing}, Year = {2021}, Location = {Punta Cana}, Pages = {6}, Url = {http://umiacs.umd.edu/~jbg//docs/2021_emnlp_qa_fairness.pdf}, }
Accessible Abstract: The data used to train computer question answering systems have three times as many men as women. This paper examines whether this is a problem for question answering accuracy. After a thorough investigation, we do not find evidence of serious accuracy discrepancies across demographic groups. However, absence of evidence is not evidence of absence, and we argue that we need more diverse datasets to better represent the world's population.
@inproceedings{Si:Zhao:Boyd-Graber-2021, Title = {What's in a Name? Answer Equivalence For Open-Domain Question Answering}, Author = {Chenglei Si and Chen Zhao and Jordan Boyd-Graber}, Booktitle = {Empirical Methods in Natural Language Processing}, Year = {2021}, Location = {Punta Cana}, Url = {http://umiacs.umd.edu/~jbg//docs/2021_emnlp_answer_equiv.pdf}, }
Accessible Abstract: Is Tim Cook the same person as Timothy Donald Cook? You might think so, but the way we train computers to answer questions would say they aren't. We show that keeping track of multiple names (which is really simple to do) creates better question answering systems. Simply by adding alternate answers mined from knowledge bases, we can improve accuracy by 1-2 points on major QA datasets.
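A minimal sketch of the alias-matching idea: score a prediction as correct if it matches the gold answer or any known alias. The alias set here is hand-typed for illustration; the paper mines these from knowledge bases.

```python
# Hand-typed alias sets for illustration; the paper mines them from knowledge bases.
ALIASES = {
    "Tim Cook": {"tim cook", "timothy donald cook", "timothy d. cook"},
}

def is_correct(prediction, gold):
    """Count a prediction as right if it matches the gold answer or any alias."""
    pred = prediction.strip().lower()
    return pred == gold.lower() or pred in ALIASES.get(gold, set())
```

Under strict exact match, "Timothy Donald Cook" would be marked wrong against the gold answer "Tim Cook"; with the alias set it is counted as correct.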
@inproceedings{Hoyle:Goel:Peskov:Hian-Cheong:Boyd-Graber:Resnik-2021, Title = {Is Automated Topic Model Evaluation Broken?: The Incoherence of Coherence}, Author = {Alexander Hoyle and Pranav Goel and Denis Peskov and Andrew Hian-Cheong and Jordan Boyd-Graber and Philip Resnik}, Booktitle = {Neural Information Processing Systems}, Location = {Online}, Year = {2021}, Url = {http://umiacs.umd.edu/~jbg//docs/2021_neurips_incoherence.pdf}, }
Accessible Abstract: Topic models help historians, journalists, and analysts make sense of large text collections. But how do you know if you have a good one? The field has settled on "automatic coherence", but this paper argues that maybe that isn't the right choice if you want to actually make real users happy. This paper builds on our 2009 paper, which showed that perplexity was not a good evaluation of interpretability for topic models; while the field adopted automatic topic coherence as a result, this paper argues that automatic coherence is not a good metric for neural topic models (even though it worked for probabilistic topic models).
@inproceedings{Eisenschlos:Dhingra:Bulian:B\"orschinger:Boyd-Graber-2021, Title = {Fool Me Twice: Entailment from Wikipedia Gamification}, Author = {Julian Martin Eisenschlos and Bhuwan Dhingra and Jannis Bulian and Benjamin B\"orschinger and Jordan Boyd-Graber}, Booktitle = {North American Association for Computational Linguistics}, Year = {2021}, Url = {http://umiacs.umd.edu/~jbg//docs/2021_naacl_fm2.pdf}, }
Accessible Abstract: Democracy and the free press depend on being able to recognize whether facts online are true. For machine learning to help with this critical problem, it needs good data identifying which statements are backed up by trusted sources and which are not. This research creates an online game in which people craft difficult claims that can train computers to spot disinformation.
@inproceedings{Zhao:Xiong:Daume-III:Boyd-Graber-2021, Title = {Multi-Step Reasoning Over Unstructured Text with Beam Dense Retrieval}, Author = {Chen Zhao and Chenyan Xiong and Hal {Daum\'{e} III} and Jordan Boyd-Graber}, Booktitle = {North American Association for Computational Linguistics}, Year = {2021}, Url = {http://umiacs.umd.edu/~jbg//docs/2021_naacl_multi_ance.pdf}, }
Accessible Abstract: For computers to answer complicated questions online, they often need to put together multiple pieces of information (Ronald Reagan was both governor of California and an actor in Bedtime for Bonzo). However, existing approaches use the links in Wikipedia to combine these clues. This research helps computers find connected information without using these explicit links.
@inproceedings{Zhao:Xiong:Qian:Boyd-Graber-2020, Title = {Complex Factoid Question Answering with a Free-Text Knowledge Graph}, Author = {Chen Zhao and Chenyan Xiong and Xin Qian and Jordan Boyd-Graber}, Booktitle = {ACM International Conference on World Wide Web}, Year = {2020}, Location = {Taipei, Taiwan}, Url = {http://umiacs.umd.edu/~jbg//docs/2020_www_delft.pdf}, }
@inproceedings{Boyd-Graber:B\"orschinger-2020, Title = {What Question Answering can Learn from Trivia Nerds}, Author = {Jordan Boyd-Graber and Benjamin B\"orschinger}, Year = {2020}, Url = {http://umiacs.umd.edu/~jbg//docs/2020_acl_trivia.pdf}, Location = {The Cyberverse Simulacrum of Seattle}, Booktitle = {Association for Computational Linguistics}, }
Accessible Abstract: This paper reflects on the similarities between trivia competitions and computer question answering research. Modern machine learning requires large, quality datasets. The central thesis of this article argues that the same things that make trivia tournaments good (they're fun, fair, and consistently crown the best trivia players) can also improve question answering datasets. Concretely, we argue that question answering datasets should clearly specify what answers are requested, have systematic policies to deal with natural ambiguity and variation, have authors look at the data (and help others do the same), make sure questions separate the best from the rest, and ensure people can have fun. We draw on the authors' experience in the trivia community (including embarrassing episodes on Jeopardy!) to illustrate our arguments.
@article{Shi:Zhao:Boyd-Graber:Daume-III:Lee-2020, Title = {On the Potential of Lexico-logical Alignments for Semantic Parsing to SQL Queries}, Author = {Tianze Shi and Chen Zhao and Jordan Boyd-Graber and Hal {Daum\'{e} III} and Lillian Lee}, Journal = {Findings of EMNLP}, Year = {2020}, Url = {http://umiacs.umd.edu/~jbg//docs/2020_findings_qalign.pdf}, }
@inproceedings{Thomas:Jordan:Jannis:Massimiliano:Markus-2020, Title = {CLIMATE-FEVER: A Dataset for Verification of Real-World Climate Claims}, Author = {Diggelmann, Thomas and Boyd-Graber, Jordan and Bulian, Jannis and Ciaramita, Massimiliano and Leippold, Markus}, Booktitle = {NIPS Workshop on Tackling Climate Change with Machine Learning}, Year = {2020}, Url = {https://research.google/pubs/pub50541/}, }
@inproceedings{Wallace:Feng:Boyd-Graber-2019, Title = {Misleading Failures of Partial-input Baselines}, Author = {Eric Wallace and Shi Feng and Jordan Boyd-Graber}, Booktitle = {Association for Computational Linguistics}, Year = {2019}, Location = {Florence, Italy}, Url = {http://umiacs.umd.edu/~jbg//docs/2019_acl_flipside.pdf}, }
@inproceedings{Elgohary:Peskov:Boyd-Graber-2019, Title = {Can You Unpack That? Learning to Rewrite Questions-in-Context}, Author = {Ahmed Elgohary and Denis Peskov and Jordan Boyd-Graber}, Booktitle = {Empirical Methods in Natural Language Processing}, Year = {2019}, Location = {Hong Kong, China}, Url = {http://umiacs.umd.edu/~jbg//docs/2019_emnlp_sequentialqa.pdf}, }
@inproceedings{Feng:Boyd-Graber-2019, Title = {What AI can do for me: Evaluating Machine Learning Interpretations in Cooperative Play}, Author = {Shi Feng and Jordan Boyd-Graber}, Booktitle = {Intelligent User Interfaces}, Year = {2019}, Location = {Los Angeles, CA}, Url = {http://umiacs.umd.edu/~jbg//docs/2019_iui_augment.pdf}, }
@article{Wallace:Rodriguez:Feng:Yamada:Boyd-Graber-2019, Title = {Trick Me If You Can: Human-in-the-loop Generation of Adversarial Question Answering Examples}, Author = {Eric Wallace and Pedro Rodriguez and Shi Feng and Ikuya Yamada and Jordan Boyd-Graber}, Journal = {Transactions of the Association for Computational Linguistics}, Year = {2019}, Volume = {10}, Url = {http://umiacs.umd.edu/~jbg//docs/2019_tacl_trick.pdf}, }
@inproceedings{Wallace:Boyd-Graber-2018, Title = {Trick Me If You Can: Adversarial Writing of Trivia Challenge Questions}, Author = {Eric Wallace and Jordan Boyd-Graber}, Booktitle = {ACL Student Research Workshop}, Year = {2018}, Location = {Melbourne, Australia}, Url = {http://aclweb.org/anthology/P18-3018}, }
@inproceedings{Feng:Wallace:Boyd-Graber-2018, Title = {Interpreting Neural Networks with Nearest Neighbors}, Author = {Shi Feng and Eric Wallace and Jordan Boyd-Graber}, Booktitle = {EMNLP Workshop on BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP}, Year = {2018}, Location = {Brussels, Belgium}, Url = {http://aclweb.org/anthology/W18-5416}, }
@inproceedings{Elgohary:Zhao:Boyd-Graber-2018, Title = {Dataset and Baselines for Sequential Open-Domain Question Answering}, Author = {Ahmed Elgohary and Chen Zhao and Jordan Boyd-Graber}, Booktitle = {Empirical Methods in Natural Language Processing}, Year = {2018}, Location = {Brussels, Belgium}, Url = {http://umiacs.umd.edu/~jbg//docs/2018_emnlp_linked.pdf}, }
@inproceedings{Feng:Wallace:II:Rodriguez:Iyyer:Boyd-Graber-2018, Title = {Pathologies of Neural Models Make Interpretation Difficult}, Author = {Shi Feng and Eric Wallace and Alvin Grissom II and Pedro Rodriguez and Mohit Iyyer and Jordan Boyd-Graber}, Booktitle = {Empirical Methods in Natural Language Processing}, Year = {2018}, Location = {Brussels, Belgium}, Url = {http://umiacs.umd.edu/~jbg//docs/2018_emnlp_rs.pdf}, }
@inproceedings{Iyyer:Manjunatha:Boyd-Graber:Davis-2018, Title = {Learning to Color from Language}, Author = {Mohit Iyyer and Varun Manjunatha and Jordan Boyd-Graber and Larry Davis}, Booktitle = {North American Association for Computational Linguistics}, Year = {2018}, Url = {http://umiacs.umd.edu/~jbg//docs/2018_naacl_colorization.pdf}, }
@inbook{Boyd-Graber:Feng:Rodriguez-2018, Editor = {Sergio Escalera and Markus Weimer}, Title = {Human-Computer Question Answering: The Case for Quizbowl}, Author = {Jordan Boyd-Graber and Shi Feng and Pedro Rodriguez}, Booktitle = {The NIPS '17 Competition: Building Intelligent Systems}, Publisher = {Springer Verlag}, Year = {2018}, Url = {http://umiacs.umd.edu/~jbg//docs/2018_nips_qbcomp.pdf}, }
@online{Boyd-Graber:Srikanth-2021, Author = {Jordan Boyd-Graber and Neha Srikanth}, Year = {2021}, Title = {Student Computer Systems vs. Trivia and AI Experts}, Location = {Online}, Url = {http://users.umiacs.umd.edu/~jbg/teaching/CMSC_470/}, }
@online{Boyd-Graber:Min:Kwiatkowski-2020, Author = {Jordan Boyd-Graber and Sewon Min and Tom Kwiatkowski}, Year = {2020}, Title = {EfficientQA Human vs. Computer Competition}, Location = {Online}, Url = {https://sites.google.com/view/qanta/past-events/neurips-2020-efficient-qa}, }
@online{Boyd-Graber:Rodriguez:Goel-2019, Author = {Jordan Boyd-Graber and Pedro Rodriguez and Pranav Goel}, Year = {2019}, Title = {Four Jeopardy! champions competed against CMSC 470 Natural Language Processing Students' Question Answering Systems}, Location = {College Park, MD}, Url = {https://www.cs.umd.edu/article/2019/05/four-jeopardy-champions-competed-against-cmsc-470-natural-language-processing}, }
@article{Gardner:Ammar-2019, Author = {Matt Gardner and Waleed Ammar}, Year = {2019}, Title = {Pathologies of Neural Models Make Interpretation Difficult}, Journal = {AI2 NLP Highlights}, Url = {https://soundcloud.com/nlp-highlights/87-pathologies-of-neural-models-make-interpretation-difficult-with-shi-feng}, }
@online{Cutlip-2019, Author = {Kimbra Cutlip}, Journal = {Science Daily}, Year = {2019}, Title = {Seeing How Computers "Think" Helps Humans Stump Machines and Reveals Artificial Intelligence Weaknesses}, Url = {https://cmns.umd.edu/news-events/features/4470}, }
@article{Charrington-2019, Author = {Sam Charrington}, Year = {2019}, Title = {Pathologies of Neural Models and Interpretability with Alvin Grissom II}, Journal = {TWiML Talk}, Url = {https://twimlai.com/twiml-talk-229-pathologies-of-neural-models-and-interpretability-with-alvin-grissom-ii/}, }
@online{Brachfeld-2019, Author = {Melissa Brachfeld}, Journal = {UMIACS}, Year = {2019}, Title = {Boyd-Graber, Feng Present Paper on Machine Learning Interpretability at IUI 2019}, Url = {http://www.umiacs.umd.edu/about-us/news/boyd-graber-feng-present-paper-machine-learning-interpretability-iui-2019}, }
@article{Gardner:Ammar-2018, Author = {Matt Gardner and Waleed Ammar}, Year = {2018}, Title = {A Discussion of Question Answering}, Journal = {AI2 NLP Highlights}, Url = {https://soundcloud.com/nlp-highlights/72-the-anatomy-question-answering-task-with-jordan-boyd-graber}, }
@article{Wright-2018, Author = {Matt Early Wright}, Year = {2018}, Title = {Inside AI's "Black Box"}, Journal = {Maryland Today}, Url = {https://today.umd.edu/articles/inside-ais-black-box-eda1240f-b360-45ac-9ee3-5d266b739f76}, }
@article{Levin-2018, Author = {Sala Levin}, Year = {2018}, Title = {What Is ... a Research-Inspired Route to "Jeopardy!"?}, Journal = {Maryland Today}, Url = {https://today.umd.edu/articles/what-research-inspired-route-jeopardy-b2cfa4ae-4958-4b02-a574-e4b0517a464c}, }
@article{Singh-2018, Author = {Samir Singh}, Year = {2018}, Title = {Summary of Pathologies of Neural Models Make Interpretations Difficult}, Journal = {UCI NLP}, Url = {https://medium.com/uci-nlp/summary-pathologies-of-neural-models-make-interpretations-difficult-emnlp-2018-62280abe9df8}, }
@article{Adams-2018, Author = {Brandi Adams}, Year = {2018}, Title = {Associate Professor Jordan Boyd-Graber to appear on Jeopardy on September 26th 2018}, Journal = {UMD Computer Science}, Url = {http://www.cs.umd.edu/article/2018/09/associate-professor-jordan-boyd-graber-appear-jeopardy-september-26th-2018}, }
@online{Brachfeld-2018, Author = {Melissa Brachfeld}, Journal = {UMIACS}, Year = {2018}, Title = {Boyd-Graber Publishes Paper in PNAS that Assesses Scholarly Influence}, Url = {http://www.umiacs.umd.edu/about-us/news/boyd-graber-publishes-paper-pnas-assesses-scholarly-influence}, }
This work is supported by the National Science Foundation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the researchers and do not necessarily reflect the views of the National Science Foundation.