Project funded by DARPA
PI: Jonathan May
The Friction for Accountability in Conversational Transactions (FACT) Artificial Intelligence Exploration (AIE) opportunity will explore human-AI dialogue-based methods that avoid over-trust through reflective reasoning ("friction") that reveals implicit assumptions between dialogue partners, enabling accountable decision-making in complex environments. FACT aims to develop and evaluate human-AI conversation-shaping algorithms that 1) capture mutual assumptions, views, and intentions based on dialogue history, 2) automatically assess the consequences of potential actions and the level of accountability for responses, and 3) reveal implicit costs and assumptions to the user, prompting critical analysis and proposing course changes as appropriate.
Jordan Boyd-Graber, Associate Professor, Computer Science (UMD)
Jonathan Kummerfeld, PI, University of Sydney
Jonathan May, PI, USC ISI
@inproceedings{Gu:Wongkamjan:Kummerfeld:Peskoff:May:Boyd-Graber-2025,
  Title = {Personalized Help for Optimizing Low-Skilled Users' Strategy},
  Author = {Feng Gu and Wichayaporn Wongkamjan and Jonathan K. Kummerfeld and Denis Peskoff and Jonathan May and Jordan Boyd-Graber},
  Booktitle = {Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics},
  Year = {2025},
  Location = {Albuquerque},
  Url = {http://cs.umd.edu/~jbg//docs/2024_arr_chiron-advisor.pdf},
}
Accessible Abstract: AIs can beat humans in game environments; however, how helpful those agents are to humans remains understudied. We augment CICERO, a natural language agent that demonstrates superhuman performance in Diplomacy, to generate both move and message advice based on player intentions. A dozen Diplomacy games with novice and experienced players, under varying advice settings, show that some of the generated advice is beneficial. It helps novices compete with experienced players and, in some instances, even surpass them. The mere presence of advice can be advantageous, even if players do not follow it.
@inproceedings{Boyd-Graber-2024,
  Title = {More Victories, Less Cooperation: Assessing Cicero's Diplomacy Play},
  Booktitle = {Association for Computational Linguistics},
  Year = {2024},
  Location = {Bangkok, Thailand},
  Url = {http://cs.umd.edu/~jbg//docs/2024_acl_cicero.pdf},
}
Accessible Abstract: Meta's recent AI, Cicero, grabbed headlines with its ability to beat humans at the game of Diplomacy: notable because players of the game must not only make the right moves but also negotiate with each other in natural language. This paper investigates why it wins so many games, measuring its ability to persuade and trick other players. While Cicero wins just about every game, this is because of superhuman strategy, not superhuman communication, suggesting there is still room for improvement in Diplomacy-playing AIs.
@inproceedings{Si:Goyal:Wu:Zhao:Feng:III:Boyd-Graber-2024,
  Title = {Large Language Models Help Humans Verify Truthfulness---Except When They Are Convincingly Wrong},
  Author = {Chenglei Si and Navita Goyal and Tongshuang Wu and Chen Zhao and Shi Feng and Hal Daum\'{e} {III} and Jordan Boyd-Graber},
  Booktitle = {North American Association for Computational Linguistics},
  Year = {2024},
  Url = {http://cs.umd.edu/~jbg//docs/2024_naacl_convincingly.pdf},
}
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the researchers and do not necessarily reflect the views of the sponsor.