
CIS Professor Daniel Lowd Receives Young Investigator Award for Work on Adversarial Machine Learning


Assistant Professor Daniel Lowd has received the Army Research Office's (ARO) Young Investigator Award for his proposal, "Inferring Trustworthiness and Deceit in Adversarial Relational Models." The award will fund Lowd's project with $360,000 over three years.

This project will develop methods for detecting malicious behavior such as social network spam and fake reviews. To better distinguish honest from dishonest behavior, these methods will exploit relational structure. For example, social network spammers are more likely to be friends with other spammers, and users who have posted one fake review tend to post several more. Thus, evidence of malicious behavior can be used to find more malicious behavior elsewhere in the network. These problems are adversarial as well. For example, spammers and fraudsters constantly adapt their behavior to avoid getting caught. As part of this project, Lowd will use techniques from game theory to make machine learning models harder to evade.
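To make the idea of exploiting relational structure concrete, here is a minimal sketch (not the project's actual method) of how suspicion might spread over a friendship graph, so that accounts linked to known spammers become more suspect. The function name, graph, and parameters below are purely illustrative assumptions.

```python
# Illustrative sketch: propagate "suspicion" scores over a friendship graph.
# Accounts connected to known spammers gradually accumulate higher scores.

from collections import defaultdict


def propagate_suspicion(edges, known_spammers, iterations=10, damping=0.5):
    """Iteratively pull each account's suspicion score toward the average
    score of its neighbors, keeping known spammers pinned at 1.0."""
    neighbors = defaultdict(set)
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)

    scores = {node: 0.0 for node in neighbors}
    for s in known_spammers:
        scores[s] = 1.0

    for _ in range(iterations):
        updated = dict(scores)
        for node in scores:
            if node in known_spammers or not neighbors[node]:
                continue
            neighbor_avg = sum(scores[n] for n in neighbors[node]) / len(neighbors[node])
            # Blend the old score with evidence from the neighborhood.
            updated[node] = (1 - damping) * scores[node] + damping * neighbor_avg
        scores = updated
    return scores


if __name__ == "__main__":
    # Toy friendship graph: spam1 and spam2 are confirmed spammers.
    edges = [("spam1", "spam2"), ("spam1", "userA"),
             ("spam2", "userA"), ("userB", "userC")]
    print(propagate_suspicion(edges, known_spammers={"spam1", "spam2"}))
```

In this toy example, userA ends up with a high suspicion score because both of its friends are known spammers, while userB and userC remain near zero.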

In previous work, Lowd helped pioneer the area of adversarial machine learning, demonstrating that spam filters and similar systems are vulnerable to attacks, even when the filter's parameters are kept secret. More recently, Lowd and his students developed methods for robustly classifying interrelated entities, such as web pages or social network profiles. Lowd's next project will extend and improve on these methods and apply them to real-world problems.
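The following toy sketch illustrates the general vulnerability described above: an attacker can adapt a message using nothing but a filter's accept/reject decisions, without ever seeing its parameters. The filter, word lists, and evasion strategy here are hypothetical illustrations, not the algorithms from Lowd's research.

```python
# Illustrative sketch: a black-box probe against a toy keyword-ratio spam filter.
# The attacker only observes blocked/not-blocked responses.


def toy_filter(message):
    """Stand-in filter: blocks a message if more than 30% of its words look spammy."""
    spammy = {"free", "winner", "prize", "click"}
    words = message.split()
    return sum(w in spammy for w in words) / len(words) > 0.3  # True = blocked


def evade(message, benign_words, filter_fn, max_queries=50):
    """Greedily append benign words until the filter stops blocking the message,
    using only the filter's yes/no responses."""
    current = message
    for queries, word in enumerate(benign_words):
        if queries >= max_queries or not filter_fn(current):
            break
        current = current + " " + word
    return current


if __name__ == "__main__":
    spam = "free prize for you"
    padded = evade(spam, ["meeting", "agenda", "invoice", "quarterly"], toy_filter)
    print(padded, "| blocked:", toy_filter(padded))
```

Here the padded message slips past the filter after a handful of queries, which is the kind of adaptive behavior the game-theoretic defenses in this project are meant to resist.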

The ARO Young Investigator Program seeks to attract outstanding young university faculty members to Army research and support their research and teaching careers.