Many relational datasets, including relational databases, feature links of different types (e.g., actors act in movies, users rate movies); such data are known as multi-relational, heterogeneous, or multi-layer graphs. Edge types/graph layers are often incompletely labeled. For example, IMDb lists Tom Cruise as a cast member of Mission Impossible, but not as its star. Inferring latent layers is useful for relational prediction tasks (e.g., predicting Tom Cruise's salary or his presence in other movies). This paper describes the Latent Layer Generative Framework (LLGF), which extends graph encoder-decoder architectures to include latent layers. The decoder treats the observed edge type signal
as a linear combination of latent layers. The encoder infers parallel node representations, one for each latent layer. We evaluate our proposed framework on six benchmark graph learning datasets. Qualitative evidence indicates that LLGF recovers ground-truth layers well. For link prediction as a downstream task, we find that extending Variational Graph Auto-Encoders with LLGF increases link prediction accuracy compared to state-of-the-art graph Variational Auto-Encoders (by up to 6% AUC, depending on the dataset).
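To make the decoder idea concrete, the following is a minimal PyTorch-style sketch, assuming per-layer inner-product decoders (as in standard graph auto-encoders) mixed by learned weights into per-edge-type reconstructions; the class, parameter names, and shapes are hypothetical illustrations, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LatentLayerDecoder(nn.Module):
    """Sketch: reconstruct each observed edge type as a linear combination
    of inner-product decoders, one per latent layer (names hypothetical)."""
    def __init__(self, num_latent_layers, num_edge_types):
        super().__init__()
        # Mixing weights: observed type t is modeled as sum_k w[t, k] * score of latent layer k
        self.mixing = nn.Parameter(torch.randn(num_edge_types, num_latent_layers))

    def forward(self, z):
        # z: (num_latent_layers, num_nodes, dim) -- parallel node embeddings,
        # one embedding matrix per latent layer, as inferred by the encoder
        layer_scores = torch.einsum('knd,kmd->knm', z, z)      # (K, N, N) per-layer inner products
        logits = torch.einsum('tk,knm->tnm', self.mixing, layer_scores)  # (T, N, N) per edge type
        return torch.sigmoid(logits)                            # edge probabilities per observed type

# Toy usage: 3 latent layers, 2 observed edge types, 5 nodes, 8-dimensional embeddings
z = torch.randn(3, 5, 8)
decoder = LatentLayerDecoder(num_latent_layers=3, num_edge_types=2)
probs = decoder(z)  # (2, 5, 5) reconstruction probabilities per edge type
```

In this sketch the encoder's role is only implied: it would produce the stacked tensor `z` of parallel representations, one per latent layer, which the decoder then mixes linearly to explain the observed edge-type signal.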
Article ID: 2022L17
Month: May
Year: 2022
Address: Online
Venue: Canadian Conference on Artificial Intelligence
Publisher: Canadian Artificial Intelligence Association
URL: https://caiac.pubpub.org/pub/o83p4avx