C. A. Metzler, A. Maleki, and R. G. Baraniuk, “From Denoising to Compressed Sensing,” July 2014. arXiv version
Abstract: A denoising algorithm seeks to remove perturbations or errors from a signal. The last three decades have seen extensive research devoted to this arena, and as a result, today’s denoisers are highly optimized algorithms that effectively remove large amounts of additive white Gaussian noise. A compressive sensing (CS) reconstruction algorithm seeks to recover a structured signal acquired using a small number of randomized measurements. Typical CS reconstruction algorithms can be cast as iteratively estimating a signal from a perturbed observation. This paper answers a natural question: How can one effectively employ a generic denoiser in a CS reconstruction algorithm? In response, we develop a denoising-based approximate message passing (D-AMP) algorithm that is capable of high-performance reconstruction. We demonstrate that, for an appropriate choice of denoiser, D-AMP offers state-of-the-art CS recovery performance for natural images. We explain the exceptional performance of D-AMP by analyzing some of its theoretical features. A critical insight in our approach is the use of an appropriate Onsager correction term in the D-AMP iterations, which coerces the signal perturbation at each iteration to be very close to the white Gaussian noise that denoisers are typically designed to remove.
The figure below illustrates reconstructions of the 256×256 Barbara test image (65536 pixels) from 6554 randomized measurements. Exploiting the state-of-the-art BM3D denoising algorithm in D-AMP enables state-of-the-art CS recovery.
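At a high level, each D-AMP iteration denoises the current estimate plus the back-projected residual, while the Onsager correction keeps that effective perturbation close to white Gaussian noise. The following is a minimal NumPy sketch that plugs in simple soft thresholding in place of a powerful image denoiser like BM3D; the function names, the threshold choice, and the Monte Carlo divergence estimate are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def soft_threshold(r, lam):
    # Stand-in denoiser: soft thresholding (the paper's headline
    # results instead plug in BM3D for natural images)
    return np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)

def d_amp(y, A, iters=50, rng=None):
    """Sketch of a D-AMP loop: denoise (estimate + back-projected residual),
    then update the residual with an Onsager correction term."""
    rng = np.random.default_rng(0) if rng is None else rng
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(iters):
        sigma = np.linalg.norm(z) / np.sqrt(m)   # effective noise level
        r = x + A.T @ z                          # behaves like x + AWGN
        x = soft_threshold(r, sigma)
        # Monte Carlo estimate of the denoiser's divergence (Onsager term)
        eps = sigma / 1000.0 + 1e-12
        eta = rng.standard_normal(n)
        div = eta @ (soft_threshold(r + eps * eta, sigma) - x) / eps
        z = y - A @ x + z * div / m              # Onsager-corrected residual
    return x
```

If the `z * div / m` term is dropped, the same loop reduces to plain iterative thresholding, and the per-iteration perturbation is no longer close to white Gaussian noise, which is the paper's central point.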
Rice University-based nonprofit OpenStax, which has already provided free textbooks to hundreds of thousands of college students, today announced a $9 million effort supported by the Laura and John Arnold Foundation to develop free, digital textbooks capable of delivering personalized lessons to high school students.
“Using advanced machine learning algorithms and new models from cognitive science, we can improve educational outcomes in a number of ways,” said project founder Richard Baraniuk. “We can help teachers and administrators by tapping into metrics that they already collect — like which kind of homework and test questions a student tends to get correct or incorrect — as well as things that only the book would notice — like which examples a student clicks on, how long she stays on a particular illustration or which sections she goes back to reread.”
The technology will pinpoint areas where students need more assistance, and it will react by delivering specific content to reinforce concepts in those areas. The personalized books will deliver tailored lessons that allow individual students to learn at their own pace. For fast learners, lessons might be streamlined and compact; for a struggling student, lessons might include supplemental material and additional learning exercises.
Richard Baraniuk was one of 3,215 researchers in the sciences and social sciences who authored papers that ranked among the top 1% most cited for their subject field and year of publication in Thomson Reuters’ academic citation indexing and search service, Web of Knowledge.
Rice University-based publisher OpenStax College today announced a distribution partnership with NACSCORP, a subsidiary of the National Association of College Stores (NACS), that will allow the nonprofit publisher to drop prices on all its print textbooks and distribute them to more than 3,000 college stores.
Chronicle of Higher Education
Removing ‘barriers’ to education through free college textbooks
By Emanuella Grinberg, CNN
April 18, 2014
D. Vats and R. G. Baraniuk, “Swapping Variables for High-Dimensional Sparse Regression with Correlated Measurements,” NIPS 2013, journal preprint 2014.
Abstract: We consider the high-dimensional sparse linear regression problem of accurately estimating a sparse vector using a small number of linear measurements that are contaminated by noise. It is well known that the standard cadre of computationally tractable sparse regression algorithms—such as the Lasso, Orthogonal Matching Pursuit (OMP), and their extensions—perform poorly when the measurement matrix contains highly correlated columns. To address this shortcoming, we develop a simple greedy algorithm, called SWAP, which iteratively swaps variables until convergence. SWAP is surprisingly effective in handling measurement matrices with high correlations. In fact, we prove that SWAP outputs the true support, the locations of the non-zero entries in the sparse vector, under a relatively mild condition on the measurement matrix. Furthermore, we show that SWAP can be used to boost the performance of any sparse regression algorithm. We empirically demonstrate the advantages of SWAP by comparing it with several state-of-the-art sparse regression algorithms.
The above example illustrates the advantages of using SWAP for regression with correlated measurements (see Figure 3 in http://dsp.rice.edu/publications/swap-journal). The x-axis corresponds to the degree of correlation in the measurement matrix, and the y-axis corresponds to the mean true positive rate (TPR), i.e., the fraction of the true support that is correctly identified. The dashed lines correspond to traditional algorithms, while the solid lines correspond to SWAP-based algorithms. We clearly see that SWAP is able to boost the performance of traditional algorithms. In particular, as the correlations become large, SWAP is able to infer a larger fraction of the variables in the true support.
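The core SWAP idea can be sketched compactly: starting from any candidate support, exchange one in-support variable for one out-of-support variable whenever that lowers the least-squares residual, and stop when no swap helps. The sketch below is a hedged toy version; the loop structure and tolerance are illustrative choices, not the paper's exact algorithm:

```python
import numpy as np
from itertools import product

def residual(y, A, S):
    # Squared residual of least squares restricted to support S
    coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
    return float(np.sum((y - A[:, S] @ coef) ** 2))

def swap(y, A, S_init, max_swaps=100):
    """Greedy SWAP-style loop: exchange one in-support variable for one
    out-of-support variable whenever that lowers the residual."""
    S = list(S_init)
    n = A.shape[1]
    best = residual(y, A, S)
    for _ in range(max_swaps):
        improved = False
        for i, j in product(range(len(S)), range(n)):
            if j in S:
                continue
            cand = S.copy()
            cand[i] = j           # swap variable S[i] out, variable j in
            r = residual(y, A, cand)
            if r < best - 1e-12:  # accept the first improving swap
                S, best, improved = cand, r, True
                break
        if not improved:          # no single swap helps: converged
            break
    return sorted(S), best
```

Because each accepted swap strictly decreases the residual, the loop terminates; the initial support would typically come from Lasso or OMP, which is how SWAP "boosts" an existing sparse regression algorithm.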
D. Vats and R. G. Baraniuk, “Path Thresholding: Asymptotically Tuning-Free High-Dimensional Sparse Regression,” in Proceedings of the 17th International Conference on Artificial Intelligence and Statistics (AISTATS), 2014.
Abstract: In this paper, we address the challenging problem of selecting tuning parameters for high-dimensional sparse regression. We propose a simple and computationally efficient method, called path thresholding (PaTh), which transforms any tuning parameter-dependent sparse regression algorithm into an asymptotically tuning-free sparse regression algorithm. More specifically, we prove that, as the problem size becomes large (in the number of variables and in the number of observations), PaTh performs accurate sparse regression, under appropriate conditions, without specifying a tuning parameter. In finite-dimensional settings, we demonstrate that PaTh can alleviate the computational burden of model selection algorithms by significantly reducing the search space of tuning parameters.
The above example illustrates the advantages of using PaTh on real data. The first figure applies the forward-backward (FoBa) sparse regression algorithm to the UCI crime data. The horizontal axis specifies the sparsity level and the vertical axis specifies the coefficient values. The second figure applies PaTh to the solution path in the first figure. PaTh reduces the total number of solutions from 50 (in the first figure) to 4 (in the second figure). We observe similar trends for the gene data (last two figures).
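To make the idea concrete, here is a toy NumPy sketch: run a path algorithm (OMP here) across sparsity levels, then return the smallest support whose residual is consistent with a noise level estimated from the tail of the path. The stopping rule below is an illustrative stand-in in the spirit of PaTh, not the paper's exact criterion:

```python
import numpy as np

def omp_path(y, A, kmax):
    """Orthogonal Matching Pursuit, keeping the whole solution path."""
    S, path = [], []
    r = y.copy()
    for _ in range(kmax):
        j = int(np.argmax(np.abs(A.T @ r)))
        if j in S:            # residual already orthogonal to A[:, S]
            break
        S.append(j)
        coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        r = y - A[:, S] @ coef
        path.append((list(S), float(np.sum(r ** 2))))
    return path

def path_threshold(path, m, c=2.0):
    """Illustrative PaTh-style rule: estimate the noise variance from the
    end of the path, then return the smallest support whose residual
    matches that noise level (up to a factor c)."""
    S_last, res_last = path[-1]
    sigma2 = res_last / (m - len(S_last))   # crude noise-variance estimate
    for S, res in path:
        if res <= c * sigma2 * (m - len(S)):
            return S
    return S_last
```

The point of such a rule is that no regularization parameter needs to be tuned: the path itself supplies the noise-level estimate used to pick the sparsity level.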
A. C. Butler, E. J. Marsh, J. P. Slavinsky, and R. G. Baraniuk, “Integrating Cognitive Science and Technology Improves Learning in a STEM Classroom,” Educational Psychology Review, March 2014.
Preprint version of the paper
Abstract: The most effective educational interventions often face significant barriers to widespread implementation because they are highly specific, resource-intense, and/or require comprehensive reform. We argue for an alternative approach to improving education: leveraging technology and cognitive science to develop interventions that generalize, scale, and can be easily implemented within any curriculum. In a classroom experiment, we investigated whether three simple, but powerful principles from cognitive science could be combined to improve learning. Although implementing these principles only required a few small changes to standard practice in a college engineering course, it significantly increased student performance on exams. Our findings highlight the potential for developing inexpensive, yet effective educational interventions that can be implemented worldwide.
T. Goldstein, L. Xu, K. F. Kelly, and R. G. Baraniuk, “The STOne Transform: Multi-Resolution Image Enhancement and Real-Time Compressive Video,” 2013.
Abstract: Compressive sensing enables the reconstruction of high-resolution signals from under-sampled data. While compressive methods simplify data acquisition, they require the solution of difficult recovery problems to make use of the resulting measurements. This article presents a new sensing framework that combines the advantages of both conventional and compressive sensing. Using the proposed STOne transform, measurements can be reconstructed instantly at Nyquist rates at any power-of-two resolution. The same data can then be “enhanced” to higher resolutions using compressive methods that leverage sparsity to “beat” the Nyquist limit. The availability of a fast direct reconstruction enables compressive measurements to be processed on small embedded devices. We demonstrate this by constructing a real-time compressive video camera.
The above example demonstrates reconstruction of high-speed video from under-sampled measurements. (a) 256×256 image frame from a video acquired at full resolution. (b) 64×64 image frame directly reconstructed from STOne measurements at 6.25% of the full measurement rate. (c) 256×256 image frame recovered from STOne measurements at 5% of the full measurement rate. (d) 256×256 image frame recovered from STOne measurements at 1% of the full measurement rate.
THE SCIENCE OF LEARNING: Bridging the Laboratory-Classroom Divide
Revitalizing education at all levels and in all subject areas is a major priority in the United States. To properly educate the leaders of tomorrow, we must move beyond the centuries-old, ingrained paradigm of education that views the process of learning as a “one-way street” in which knowledge is transmitted from teacher to learner via paper textbooks and lectures. Instead, we must provide learners with tools to effectively engage in self-regulated learning outside the classroom.
Despite the promise and some early successes in computer-based personalized learning, many important issues and challenges remain to be surmounted before personalized learning reaches the mainstream. The goal of this annual workshop is to bring together the intellectual leaders of this new movement in order to exchange ideas, network, and plot a course to the future.
This year’s workshop will focus on how knowledge that has emerged from the science of learning can inform the development of personalized learning systems. Machine learning algorithms and “big data” have the potential to revolutionize learning, but their application should be based on basic research findings from cognitive science, psychology, and education. There is a pressing need to explore how research findings from the laboratory can be applied to facilitate learning in dynamic and complicated educational environments. The workshop will feature leaders in the basic research on the science of learning who will discuss both their recent findings and the potential implications for personalized learning.
The scope of the workshop encompasses PK-12 through college and lifelong learning. While primarily an in-person event, the lectures will also be webcast and archived for later viewing.
- Michael Mozer, University of Colorado-Boulder
- Kurt VanLehn, Arizona State University
- Jeffrey Karpicke, Purdue University
- Mark McDaniel, Washington University in St. Louis
- Hal Pashler, University of California, San Diego
- Rice University Office of the President
- Rice University Office of the Provost
- George R. Brown School of Engineering
- Ken Kennedy Institute
- Rice Center for Digital Learning and Scholarship
- Richard Baraniuk, C. Sidney Burrus, Rice University
- Elizabeth Marsh, Andrew Butler, Duke University