Many real-world data sets are sparse or almost sparse. One way to measure this for a matrix $A\in \mathbb{R}^{n\times n}$ is the \emph{numerical sparsity} $\mathsf{ns}(A)$, defined as the minimum $k\geq 1$ such that $\|a\|_1/\|a\|_2 \leq \sqrt{k}$ for every row and every column $a$ of $A$. This measure is smooth, and it is clearly bounded by the number of non-zero entries in the row/column $a$. The seminal work of Achlioptas and McSherry [2007] put forward the question of approximating an input matrix $A$ by entrywise sampling. More precisely, the goal is to quickly compute a sparse matrix $\tilde{A}$ satisfying $\|A - \tilde{A}\|_2 \leq \epsilon \|A\|_2$ (i.e., additive spectral approximation), given an error parameter $\epsilon>0$. The known schemes sample and rescale a small fraction of the entries of $A$. We propose a scheme that sparsifies an almost-sparse matrix $A$ -- it produces, with high probability, a matrix $\tilde{A}$ with $O(\epsilon^{-2}\mathsf{ns}(A) \cdot n\ln n)$ non-zero entries. We also prove that this upper bound on $\mathsf{nnz}(\tilde{A})$ is \emph{tight} up to logarithmic factors. Moreover, our upper bound improves when the spectrum of $A$ decays quickly (roughly replacing $n$ with the stable rank of $A$). Our scheme can be implemented in time $O(\mathsf{nnz}(A))$ when $\|A\|_2$ is given. A similar upper bound was previously obtained by Achlioptas et al. [2013], but only for a restricted class of inputs that does not even include symmetric or covariance matrices. Finally, we demonstrate two applications of these sampling techniques: faster approximate matrix multiplication, and ridge regression via sparse preconditioners.
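To make the definition of numerical sparsity concrete, the following is a minimal sketch (not from the paper) that computes $\mathsf{ns}(A)$ directly from the definition: it is the smallest integer $k$ with $\|a\|_1/\|a\|_2 \leq \sqrt{k}$ over all rows and columns $a$, i.e., the ceiling of the maximum of $(\|a\|_1/\|a\|_2)^2$. The function name `numerical_sparsity` is our own; zero rows/columns are skipped, since the ratio is undefined for them.

```python
import numpy as np

def numerical_sparsity(A):
    """Smallest integer k >= 1 such that ||a||_1 / ||a||_2 <= sqrt(k)
    for every nonzero row and column a of A."""
    vecs = list(A) + list(A.T)          # all rows, then all columns
    ratios = [
        np.linalg.norm(v, 1) / np.linalg.norm(v, 2)
        for v in vecs
        if np.linalg.norm(v, 2) > 0     # skip all-zero rows/columns
    ]
    # (||a||_1 / ||a||_2)^2 <= k, so take the ceiling of the worst ratio squared
    return max(1, int(np.ceil(max(r**2 for r in ratios))))
```

For the identity matrix every row/column is a standard basis vector, so the ratio is $1$ and $\mathsf{ns}(I)=1$; for the all-ones $n\times n$ matrix each row has $\|a\|_1=n$ and $\|a\|_2=\sqrt{n}$, giving $\mathsf{ns}=n$, matching the intuition that $\mathsf{ns}(A)$ is at most the number of non-zeros per row/column.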