
Colloquium Details

Faculty Search Colloquium: Markov Logic: Representation, Inference and Learning

Author: Daniel Lowd, University of Washington
Date: April 30, 2009
Time: 10:30
Location: 220 Deschutes
Host: Michal Young

Abstract

Many applications of AI, including natural language processing, information extraction, bioinformatics, robot mapping, and social network analysis, have both relational and statistical aspects. Historically, this has led to a divide between relational approaches based on first-order logic and statistical approaches based on probabilistic graphical models. Markov logic unifies the two by attaching weights to formulas in first-order logic; the weighted formulas then serve as templates for constructing a Markov network.
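To make the template idea concrete, here is a minimal sketch in Python of grounding one weighted formula over a tiny domain and computing the resulting distribution over possible worlds. The predicates (Smokes, Friends), the two-person domain, and the weight 1.5 are illustrative assumptions, not part of the talk; the probability of a world is proportional to exp(w · n), where n is the number of satisfied groundings of the formula in that world.

```python
import itertools
import math

# Hypothetical toy Markov logic network: a single weighted formula
#   Friends(x, y) AND Smokes(x) => Smokes(y),  weight W = 1.5
# grounded over a two-person domain. All names and numbers here are
# illustrative, chosen only to show the grounding mechanics.
DOMAIN = ["Anna", "Bob"]
W = 1.5

def atoms():
    """All ground atoms: 2 Smokes atoms + 4 Friends atoms = 6 atoms."""
    ats = [("Smokes", p) for p in DOMAIN]
    ats += [("Friends", x, y) for x, y in itertools.product(DOMAIN, repeat=2)]
    return ats

def all_worlds():
    """Every truth assignment to the ground atoms (2**6 = 64 worlds)."""
    ats = atoms()
    for values in itertools.product([False, True], repeat=len(ats)):
        yield dict(zip(ats, values))

def grounding_satisfied(world, x, y):
    """Truth value of Friends(x,y) & Smokes(x) => Smokes(y) in `world`."""
    antecedent = world[("Friends", x, y)] and world[("Smokes", x)]
    return (not antecedent) or world[("Smokes", y)]

def unnormalized_weight(world):
    """exp(W * n), where n counts satisfied groundings of the formula."""
    n = sum(grounding_satisfied(world, x, y)
            for x, y in itertools.product(DOMAIN, repeat=2))
    return math.exp(W * n)

# P(world) = exp(W * n(world)) / Z, with Z summing over all worlds.
Z = sum(unnormalized_weight(wld) for wld in all_worlds())
probs = [unnormalized_weight(wld) / Z for wld in all_worlds()]
```

Even in this toy setting the key property is visible: a world violating some groundings is less probable, not impossible, and raising W toward infinity recovers the hard first-order constraint.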

In this talk, I will describe recent advances in Markov logic representation, algorithms, and applications. In particular, I will present my work on recursive Markov logic, which gives probabilistic models the full recursive capabilities of first-order logic. I will also show how more efficient weight-learning algorithms can be obtained by adapting ideas from convex optimization. Finally, I will discuss current work on combining learning with inference to make exact inference tractable even in very complex models such as Markov logic networks. I will illustrate these developments with applications to probabilistic databases, entity resolution, Web mining, and others.

Biography

Daniel Lowd is a Ph.D. candidate in the Department of Computer Science and Engineering at the University of Washington. His research covers a range of topics in statistical machine learning, including statistical relational representations, unifying learning and inference, and adversarial machine learning scenarios (e.g., spam filtering). His book on Markov logic, coauthored with Pedro Domingos, will be published this summer by Morgan & Claypool. He is also the recipient of graduate research fellowships from the National Science Foundation and Microsoft Research.