High Dimensional Low Rank Plus Sparse Matrix Decomposition

Keywords

big data; column sampling; low-rank matrix; matrix decomposition; sketching; subspace learning

Abstract

This paper is concerned with the problem of low-rank plus sparse matrix decomposition for big data. Conventional algorithms for matrix decomposition use the entire data to extract the low-rank and sparse components and are based on optimization problems whose complexity scales with the dimension of the data, which limits their scalability. Furthermore, existing randomized approaches mostly rely on uniform random sampling, which is quite inefficient for many real-world data matrices that exhibit additional structure (e.g., clustering). In this paper, a scalable subspace-pursuit approach that transforms the decomposition problem into a subspace learning problem is proposed. The decomposition is carried out using a small data sketch formed from sampled columns/rows. Even when the data are sampled uniformly at random, it is shown that the sufficient number of sampled columns/rows is roughly $O(r\mu)$, where $\mu$ is the coherency parameter and $r$ is the rank of the low-rank component. In addition, adaptive sampling algorithms are proposed to address the problem of sampling columns/rows from structured data. We provide an analysis of the proposed method with adaptive sampling and show that adaptive sampling makes the required number of sampled columns/rows invariant to the distribution of the data. The proposed approach is amenable to online implementation, and an online scheme is proposed.
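To make the sampling-based idea in the abstract concrete, the following is a minimal Python sketch of the general pipeline it describes: sample a small set of columns uniformly at random to form a sketch, learn the column subspace of the low-rank component from that sketch, then project the full matrix onto the learned subspace and keep the large residual entries as the sparse part. The function name, the plain truncated-SVD subspace step, and the hard threshold `thresh` are illustrative assumptions only; the paper's actual method uses a robust subspace-learning step, row sampling as well, and adaptive sampling for structured data.

```python
import numpy as np


def sketch_lps_decomposition(M, rank, n_cols, thresh, seed=0):
    """Illustrative sketch-based low-rank + sparse decomposition (not the paper's algorithm).

    1. Sample columns uniformly at random to form a small sketch.
    2. Estimate an orthonormal basis U of the low-rank column space from the
       sketch (plain truncated SVD here, as a stand-in for the robust
       subspace-learning step described in the paper).
    3. Recover L by projecting the full data onto span(U) and take the
       thresholded residual as the sparse component S.
    """
    rng = np.random.default_rng(seed)
    n1, n2 = M.shape

    # Step 1: uniform random column sampling -> sketch of size n1 x n_cols.
    cols = rng.choice(n2, size=min(n_cols, n2), replace=False)
    sketch = M[:, cols]

    # Step 2: truncated SVD of the sketch gives a rank-r basis estimate.
    U, _, _ = np.linalg.svd(sketch, full_matrices=False)
    U = U[:, :rank]

    # Step 3: project the full matrix; large residual entries form S.
    L = U @ (U.T @ M)
    R = M - L
    S = np.where(np.abs(R) > thresh, R, 0.0)
    return L, S


# Toy usage: a rank-2 matrix corrupted by a few large sparse outliers.
rng = np.random.default_rng(1)
L_true = rng.standard_normal((200, 2)) @ rng.standard_normal((2, 400))
S_true = np.zeros_like(L_true)
S_true[rng.random(L_true.shape) < 0.01] = 10.0
M = L_true + S_true

L_hat, S_hat = sketch_lps_decomposition(M, rank=2, n_cols=20, thresh=3.0)
print("relative low-rank error:",
      np.linalg.norm(L_hat - L_true) / np.linalg.norm(L_true))
```

Note that the sketch above processes only `n_cols` columns in the subspace-learning step, which is where the scalability gain over full-data decomposition comes from; the paper's analysis bounds how many sampled columns/rows suffice (roughly $O(r\mu)$ under uniform sampling).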

Publication Date

4-15-2017

Publication Title

IEEE Transactions on Signal Processing

Volume

65

Issue

8

Number of Pages

2004-2019

Document Type

Article

Personal Identifier

scopus

DOI Link

https://doi.org/10.1109/TSP.2017.2649482

Scopus ID

85014911170 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/85014911170
