Graph Representation Learning and Graph Classification
Sara Riazi
Committee: Boyana Norris (chair), Steve Fickas, Dejing Dou
Area Exam (June 2017)
Keywords: Network Embedding, Graph Representation, Graph Classification

Many real-world problems are represented using graphs. For example, given the graph of a chemical compound, we want to determine whether it causes a gene mutation. As another example, given the graph of a social network, we want to predict a potential friendship that does not yet exist but is likely to appear soon. Many of these questions can be answered using machine learning methods if we have vector representations of the inputs, which are either graphs or vertices, depending on the problem.

A general approach to obtaining such vectors is to learn a latent vector representation for the vertices or for the entire graph, so that these vectors can be used in machine learning tasks such as training a classifier or a predictive model. The learned vectors should reflect the graph's structure as well as its vertex and edge attributes. For example, two adjacent vertices in a graph should have similar vector representations.
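To make the intuition that adjacent vertices should receive similar vectors concrete, the sketch below gives a minimal spectral embedding based on the graph Laplacian. This is an illustration only, not a method discussed in this document; the function name toy_embedding and the small two-triangle example graph are hypothetical choices for the sketch.

```python
# A minimal sketch (assuming an unweighted, undirected graph given as a dense
# adjacency matrix): Laplacian-eigenmaps-style embedding, in which vertices
# joined by an edge end up with nearby vectors.
import numpy as np


def toy_embedding(adjacency: np.ndarray, dim: int = 2) -> np.ndarray:
    """Embed the vertices so that neighboring vertices are close in R^dim."""
    degrees = adjacency.sum(axis=1)
    laplacian = np.diag(degrees) - adjacency        # unnormalized graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(laplacian)    # eigenpairs in ascending order
    # Skip the constant eigenvector (eigenvalue ~ 0); the next `dim` eigenvectors
    # minimize the sum over edges of ||z_u - z_v||^2 under orthogonality constraints,
    # so adjacent vertices receive similar coordinates.
    return eigvecs[:, 1:dim + 1]


if __name__ == "__main__":
    # Hypothetical example: two triangles connected by a single bridge edge.
    # Vertices within the same triangle get more similar embeddings than
    # vertices on opposite sides of the bridge.
    A = np.zeros((6, 6))
    edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    print(toy_embedding(A, dim=2))
```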

The problem of learning a latent vector representation for graphs or vertices is called graph representation learning or graph embedding. In this document, we mainly focus on recent developments in graph representation learning in different settings and on its connections to related problems, such as graph classification and graph clustering.