Title

Large-Scale Supervised Similarity Learning in Networks

Keywords

Large-scale network; Link content consistency; Supervised matrix factorization; Supervised network embedding; Supervised network similarity learning

Abstract

The problem of similarity learning is relevant to many data mining applications, such as recommender systems, classification, and retrieval. It is particularly challenging in the context of networks, which contain different aspects such as topological structure, content, and user supervision; these aspects need to be combined effectively to create a holistic similarity function. In particular, while most similarity learning methods in networks, such as SimRank, exploit the topological structure, user supervision and content are rarely considered. In this paper, a factorized similarity learning (FSL) method is proposed to integrate the links, node content, and user supervision into a uniform framework. The similarity function is learned via matrix factorization, and the final similarities are approximated by the span of low-rank matrices. The proposed framework is further extended to a noise-tolerant version by adopting a hinge loss as an alternative. To facilitate efficient computation on large-scale data, a parallel extension is developed. Experiments on the DBLP and CoRA data sets show that FSL is robust and efficient and outperforms the state of the art. The code for the learning algorithm used in our experiments is available at http://www.ifp.illinois.edu/~chang87/.
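
To make the abstract's idea concrete, the sketch below shows one minimal way a factorized, supervised similarity could be set up: a low-rank factor F is fitted by gradient descent so that the similarity matrix F F^T agrees with the link structure, a content-based kernel, and a set of labeled node pairs. This is an illustrative assumption-based sketch in Python/NumPy, not the paper's actual FSL objective; the squared-loss form, the linear content kernel, the symmetric-adjacency assumption, and all names and hyperparameters (alpha, beta, rank, lr) are hypothetical, and the hinge-loss and parallel variants mentioned in the abstract are not reproduced here.

import numpy as np

def factorized_similarity(A, X, sup_pairs, sup_labels, rank=10,
                          alpha=1.0, beta=0.1, lr=0.01, iters=200, seed=0):
    """Illustrative sketch: learn a low-rank factor F so that S = F @ F.T
    approximates the (assumed symmetric) adjacency A, stays close to a
    content kernel X @ X.T, and respects supervised (i, j) -> label pairs."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    F = 0.01 * rng.standard_normal((n, rank))
    C = X @ X.T                      # content similarity (assumed linear kernel)
    for _ in range(iters):
        S = F @ F.T
        # Gradient of ||A - S||_F^2 + alpha * ||C - S||_F^2 w.r.t. F
        # (uses symmetry of A, C, and S)
        G = -4.0 * ((A - S) + alpha * (C - S)) @ F
        # Supervision term: push S[i, j] toward the labeled similarity y
        for (i, j), y in zip(sup_pairs, sup_labels):
            e = S[i, j] - y
            G[i] += 2.0 * beta * e * F[j]
            G[j] += 2.0 * beta * e * F[i]
        F -= lr * G
    return F @ F.T                   # final pairwise similarity matrix

In this toy setup the learned similarities are, as in the abstract, spanned by low-rank factors, and the labeled pairs act as the user supervision that purely structural measures such as SimRank do not use.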

Publication Date

9-1-2016

Publication Title

Knowledge and Information Systems

Volume

48

Issue

3

Number of Pages

707-740

Document Type

Article

Personal Identifier

scopus

DOI Link

https://doi.org/10.1007/s10115-015-0894-8

Scopus ID

84944916452 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/84944916452
