An Exploration of Decision Making Models in the Face of Untrusted Data
Sarah Kinsey
Committee: Thanh Nguyen (chair), Daniel Lowd, Lei Jiao
Area Exam (May 2023)
Keywords: Artificial Intelligence, Adversarial Learning, Game Theory, Decision-Focused Learning

In domains ranging from security and conservation to public health and planning, an ever-increasing number of artificial intelligence approaches are being deployed in the real world. Real-world deployment brings additional complexities, challenges, and vulnerabilities. This work examines these considerations from three broad directions: security games, data-based decision making, and adversarial learning. In security games, we are primarily concerned with scenarios involving deliberate deception, in which one agent manipulates data to alter the strategy formed by its adversary. Similarly, in data-based decision making, a learning model uses data to reach a decision; this data provides a vector of attack that an adversary may exploit. To investigate the resulting threat models, we draw on adversarial learning research, which studies both how attacks can be carried out when such an opening exists and how they can be defended against. Understanding these three areas provides a comprehensive view of how decision-making models perform when the data on which they rely is compromised, enabling further research toward more robust systems.