Abstract: We consider the high-dimensional sparse linear regression problem of accurately estimating a sparse vector using a small number of linear measurements that are contaminated by noise. It is well known that the standard suite of computationally tractable sparse regression algorithms, such as the Lasso, Orthogonal Matching Pursuit (OMP), and their extensions, performs poorly when the measurement matrix contains highly correlated columns. To address this shortcoming, we develop a simple greedy algorithm, called SWAP, which iteratively swaps variables until convergence. SWAP is surprisingly effective in handling measurement matrices with high correlations. In fact, we prove that SWAP outputs the true support (the locations of the non-zero entries in the sparse vector) under a relatively mild condition on the measurement matrix. Furthermore, we show that SWAP can be used to boost the performance of any sparse regression algorithm. We empirically demonstrate the advantages of SWAP by comparing it with several state-of-the-art sparse regression algorithms.
Figure 3 (available at http://dsp.rice.edu/publications/swap-journal) illustrates the advantages of using SWAP for regression with correlated measurements. The x-axis corresponds to the degree of correlation in the measurement matrix, and the y-axis corresponds to the mean true positive rate (TPR), i.e., the fraction of the true support that is correctly recovered. The dashed lines correspond to traditional algorithms, while the solid lines correspond to SWAP-based algorithms. We clearly see that SWAP is able to boost the performance of traditional algorithms. In particular, as the correlations become large, SWAP infers a larger fraction of the variables in the true support.
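To make the swap-until-convergence idea concrete, the sketch below shows one plausible reading of the procedure described in the abstract: starting from an initial support estimate (e.g., the output of Lasso or OMP), repeatedly exchange one in-support variable for one out-of-support variable whenever the exchange lowers the least-squares residual, and stop when no exchange helps. This is a hedged illustration of the greedy swapping principle, not the authors' exact algorithm; the function name `swap` and all implementation details are our own.

```python
import numpy as np

def swap(X, y, support):
    """Hypothetical sketch of greedy support swapping.

    X : (n, p) measurement matrix
    y : (n,) noisy observations
    support : initial guess for the support (list of column indices)

    Repeatedly replaces one variable inside the support with one outside it
    whenever doing so reduces the least-squares residual, until no single
    swap gives an improvement.
    """
    n, p = X.shape
    support = set(support)

    def residual(S):
        # Least-squares fit of y on the columns indexed by S.
        cols = sorted(S)
        beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
        return np.linalg.norm(y - X[:, cols] @ beta)

    best = residual(support)
    improved = True
    while improved:
        improved = False
        for i in list(support):            # candidate variable to remove
            for j in range(p):             # candidate variable to add
                if j in support:
                    continue
                cand = (support - {i}) | {j}
                r = residual(cand)
                if r < best - 1e-12:       # accept first improving swap
                    support, best = cand, r
                    improved = True
                    break
            if improved:
                break
    return sorted(support)
```

Because each accepted swap strictly decreases the residual and there are finitely many supports of a given size, the loop always terminates; whether it reaches the true support is exactly the question the paper's recovery condition addresses.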