Professor Lowd Awarded NSF EAGER Grant to Fund Research in Machine Learning in Adversarial Settings

Assistant Professor Daniel Lowd was recently awarded an NSF EAGER grant, which will fund preliminary research on using machine learning in adversarial settings, such as spam filtering and malware detection. The goal is to better understand how attackers can evade machine learning systems, and to make these systems more robust to such attacks.

For example, spammers add and remove words from their email messages in order to bypass spam filters, and web spammers try to deceive search engines by creating "link farms" to make a web site seem more important. These attacks can drastically reduce the performance of traditional machine learning systems, which are built on the assumption that future data will resemble past data. In these adversarial domains, attackers quickly evolve to avoid detection. This project will develop new methods for analyzing the vulnerability of classifiers to adaptive adversaries and new game-theoretic learning algorithms that reduce that vulnerability by anticipating these adversaries.
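To make the word-based evasion concrete, here is a minimal sketch (not from the project itself, and using entirely made-up word weights and threshold) of how adding benign-looking words can push a spam message below a simple linear classifier's decision threshold without changing the spam payload:

```python
# Toy linear spam scorer: each word has a learned weight; positive weights
# indicate spam, negative weights indicate legitimate mail. All weights and
# the threshold below are hypothetical, chosen only for illustration.
WEIGHTS = {
    "free": 2.0, "winner": 2.5, "click": 1.5,
    "meeting": -1.0, "report": -1.2, "schedule": -1.1,
}
THRESHOLD = 2.0  # score above this => classified as spam


def score(words):
    """Sum the weights of known words (unknown words contribute 0)."""
    return sum(WEIGHTS.get(w, 0.0) for w in words)


original = ["free", "winner", "click"]
# The attacker keeps the spam content but pads the message with
# benign-weighted words to drag the total score down.
evasive = original + ["meeting", "report", "schedule", "schedule"]

print(score(original))  # 6.0  -> above threshold, caught
print(score(evasive))   # 1.6  -> below threshold, evades the filter
```

A learner that anticipates this kind of manipulation, for example by reasoning game-theoretically about which word changes an attacker would make, can choose weights that are much harder to game in this way.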

The funds from this grant will help support Ph.D. student Ali Torkamani, who has been working with Dr. Lowd in this area for the past three years. This collaboration has already led to publications at the International Conference on Machine Learning, one of the top conferences in machine learning, along with presentations at numerous workshops. Ph.D. student Brent Lessley and master's student Mino de Raj are also working with Dr. Lowd on related topics.