Project funded by DARPA (ASED)
PI: Jordan Boyd-Graber
The Active Social Engineering Defense (ASED) program aims to develop the core technology to enable the capability to automatically elicit information from a malicious adversary in order to identify, disrupt, and investigate social engineering attacks. If successful, the ASED technology will do this by mediating communications between users and potential attackers, actively detecting attacks and coordinating investigations to discover the identity of the attacker.
The UMD team is working to build datasets that are annotated for deception in online conversations.
Jordan Boyd-Graber, Associate Professor, Computer Science (UMD)
Ahmed Elgohary, PhD Student, Computer Science (UMD)
Shi Feng, PhD Student, Computer Science (UMD)
Denis Peskov, PhD Student, Computer Science (UMD)
@inproceedings{Peskov:Cheng:Elgohary:Barrow:Danescu-Niculescu-Mizil:Boyd-Graber-2020,
  Title = {It Takes Two to Lie: One to Lie and One to Listen},
  Author = {Denis Peskov and Benny Cheng and Ahmed Elgohary and Joe Barrow and Cristian Danescu-Niculescu-Mizil and Jordan Boyd-Graber},
  Booktitle = {Association for Computational Linguistics},
  Year = {2020},
  Location = {The Cyberverse Simulacrum of Seattle},
  Url = {http://umiacs.umd.edu/~jbg//docs/2020_acl_diplomacy.pdf},
}
Accessible Abstract: Machine learning techniques to detect deception in online communications require training and evaluation data. However, such data is scarce, either because gold labels are uncertain or because of privacy concerns; we created a new, large deception-centered dataset in the online game of Diplomacy. We gathered 17,289 messages from 12 games (each of which took over a month) involving 84 players, most of whom were unique users. This data was collected with a custom-made bot that let us collect both messages and annotations. The user pool was recruited from scratch: participants varied in gender, age, nationality, and past game experience, and included the former president of the Diplomacy players' association, several top-ranked players in the world, a board game shop owner, and scientists. We built machine learning models to detect lies using linguistic, contextual, and power-dynamic features. Our best model had lie-detection accuracy similar to that of humans.
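To make the feature-based approach concrete, here is a minimal illustrative sketch (not the paper's actual model): it extracts a few simple linguistic features from a message and scores it with a hand-set linear model. All feature names and weights below are hypothetical.

```python
import math

def extract_features(message: str) -> dict:
    """Toy linguistic features; real models use far richer signals."""
    tokens = message.lower().split()
    return {
        "n_tokens": len(tokens),
        # first-person pronouns, a common cue in deception research
        "first_person": sum(t in {"i", "me", "my", "we", "us", "our"} for t in tokens),
        # hedging words that may signal evasiveness
        "hedges": sum(t in {"maybe", "perhaps", "probably"} for t in tokens),
    }

# Hypothetical weights for illustration only; a trained model would learn these.
WEIGHTS = {"n_tokens": 0.01, "first_person": 0.3, "hedges": 0.5}
BIAS = -1.0

def lie_score(message: str) -> float:
    """Logistic score in (0, 1); higher suggests a more deceptive message."""
    feats = extract_features(message)
    z = BIAS + sum(WEIGHTS[k] * v for k, v in feats.items())
    return 1 / (1 + math.exp(-z))

msg = "I promise we will support you into Munich, trust me."
print(round(lie_score(msg), 3))
```

In the actual work, such features (along with conversational context and power dynamics) feed learned classifiers rather than hand-set weights.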
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the researchers and do not necessarily reflect the views of the sponsor.