NIPS Workshop on "Human Propelled Machine Learning"

In typical applications of machine learning (ML), humans enter the process at an early stage, determining an initial representation of the problem and preparing the data, and at a late stage, interpreting the results and making decisions based on them. Consequently, the bulk of the ML literature deals with such situations. Much less research has been devoted to ML involving “humans-in-the-loop,” where humans play a more intrinsic role in the process, interacting with the ML system to iterate towards a solution to which both humans and machines have contributed. In these situations, the goal is to optimize some quantity that can be obtained only by evaluating human responses and judgments. Examples of this hybrid, “human-in-the-loop” ML approach include:

  • ML-based education, where a scheduling system acquires information about learners with the goal of selecting and recommending optimal lessons;
  • Adaptive testing in psychological surveys, educational assessments, and recommender systems, where the system acquires testees’ responses and selects the next item in an adaptive and automated manner;
  • Interactive topic modeling, where human interpretations of the topics are used to iteratively refine an estimated model;
  • Image classification, where human judgments can be leveraged to improve the quality and information content of image features or classifiers.

In this workshop, held in December 2014, we focused on emerging theories, algorithms, and applications of human-in-the-loop ML.

Workshop web page with speakers and their slides

More information about the NIPS conference


10 More OpenStax College Titles On Their Way

Rice University-based publisher OpenStax College today announced $9.5 million in philanthropic grants from the Laura and John Arnold Foundation (LJAF), Rice alumni John and Ann Doerr, and the William and Flora Hewlett Foundation to add 10 titles by 2017 to its catalog of free, high-quality textbooks for the nation’s most-attended college courses. In all, OpenStax College is creating free books for 25 of the country’s most-attended college courses.

OpenStax College uses philanthropic gifts to produce high-quality, peer-reviewed textbooks that are free online and low-cost in print. Its first seven books have already saved students more than $13 million. The books have been downloaded more than 650,000 times and have been adopted for use in nearly 900 courses at community colleges, four-year colleges, universities and high schools.  OpenStax College has four titles in production for next year and plans to expand its library to 21 titles by 2017.  The additional funding will allow the nonprofit publisher to develop textbooks for additional high-enrollment courses, including several science and mathematics courses.

“Our books are opening access to higher education for students who couldn’t otherwise afford it,” said Rice Professor Richard Baraniuk, founder and director of OpenStax College. “We’ve already saved students millions of dollars, and thanks to the generosity of our philanthropic partners, we hope to save students more than $500 million by 2020.”

Read more:


From Denoising to Compressed Sensing

C. A. Metzler, A. Maleki, and R. G. Baraniuk, “From Denoising to Compressed Sensing,” July 2014.  arXiv version

Abstract:  A denoising algorithm seeks to remove perturbations or errors from a signal. The last three decades have seen extensive research devoted to this arena, and as a result, today’s denoisers are highly optimized algorithms that effectively remove large amounts of additive white Gaussian noise. A compressive sensing (CS) reconstruction algorithm seeks to recover a structured signal acquired using a small number of randomized measurements. Typical CS reconstruction algorithms can be cast as iteratively estimating a signal from a perturbed observation. This paper answers a natural question: How can one effectively employ a generic denoiser in a CS reconstruction algorithm? In response, in this paper, we develop a denoising-based approximate message passing (D-AMP) algorithm that is capable of high-performance reconstruction. We demonstrate that, for an appropriate choice of denoiser, D-AMP offers state-of-the-art CS recovery performance for natural images. We explain the exceptional performance of D-AMP by analyzing some of its theoretical features. A critical insight in our approach is the use of an appropriate Onsager correction term in the D-AMP iterations, which coerces the signal perturbation at each iteration to be very close to the white Gaussian noise that denoisers are typically designed to remove.
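For readers who want to experiment, here is a minimal Python sketch of the generic D-AMP recursion (a simplified illustration under our own naming conventions, not the authors’ released code). It plugs an arbitrary denoiser into the AMP iterations and estimates the Onsager correction term with the standard Monte Carlo divergence approximation; the denoiser(v, sigma) interface is an assumption.

    import numpy as np

    def damp(y, A, denoiser, n_iters=30, eps=1e-3, rng=None):
        """Sketch of denoising-based AMP (D-AMP).

        y        : (m,) measurements, y = A x + noise
        A        : (m, n) measurement matrix
        denoiser : callable denoiser(v, sigma) -> (n,) estimate (placeholder interface)
        """
        rng = np.random.default_rng() if rng is None else rng
        m, n = A.shape
        x = np.zeros(n)
        z = y.copy()
        for _ in range(n_iters):
            sigma = np.linalg.norm(z) / np.sqrt(m)      # effective noise level
            pseudo = x + A.T @ z                        # pseudo-data passed to the denoiser
            x_new = denoiser(pseudo, sigma)
            # Monte Carlo estimate of the denoiser's divergence (Onsager term),
            # probing with a small random perturbation of size eps
            eta = rng.standard_normal(n)
            div = eta @ (denoiser(pseudo + eps * eta, sigma) - x_new) / eps
            # Residual with Onsager correction, which keeps the per-iteration
            # perturbation close to additive white Gaussian noise
            z = y - A @ x_new + (z / m) * div
            x = x_new
        return x

In the experiments described below, the plug-in denoiser is BM3D; any well-behaved denoiser can be substituted through the same interface.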

The figure below illustrates reconstructions of the 256×256 Barbara test image (65,536 pixels) from 6,554 randomized measurements (roughly 10% subsampling). Using the state-of-the-art BM3D denoising algorithm within D-AMP yields correspondingly state-of-the-art CS recovery.


Adaptive Textbook Project Launched

Rice University-based nonprofit OpenStax, which has already provided free textbooks to hundreds of thousands of college students, today announced a $9 million effort supported by the Laura and John Arnold Foundation to develop free, digital textbooks capable of delivering personalized lessons to high school students.

“Using advanced machine learning algorithms and new models from cognitive science, we can improve educational outcomes in a number of ways,” said project founder Richard Baraniuk. “We can help teachers and administrators by tapping into metrics that they already collect — like which kind of homework and test questions a student tends to get correct or incorrect — as well as things that only the book would notice — like which examples a student clicks on, how long she stays on a particular illustration or which sections she goes back to reread.”

The technology will pinpoint areas where students need more assistance, and it will react by delivering specific content to reinforce concepts in those areas. The personalized books will deliver tailored lessons that allow individual students to learn at their own pace. For fast learners, lessons might be streamlined and compact; for a struggling student, lessons might include supplemental material and additional learning exercises.

Read more:


Thomson Reuters’ Highly Cited Researcher for 2014

Richard Baraniuk was one of 3,215 researchers in the sciences and social sciences who authored papers that ranked among the top 1% most cited for their subject field and year of publication in Thomson Reuters’ academic citation indexing and search service, Web of Knowledge.
More info available here and here.


Free Textbooks at Lower Prices

Rice University-based publisher OpenStax College today announced a distribution partnership with NACSCORP, a subsidiary of the National Association of College Stores (NACS), that will allow the nonprofit publisher to drop prices on all its print textbooks and distribute them to more than 3,000 college stores.


Press release
Chronicle of Higher Education


Removing ‘Barriers’ to Education through Free College Textbooks


By Emanuella Grinberg, CNN
April 18, 2014
Link to article


When in Doubt, SWAP!

D. Vats and R. G. Baraniuk, “Swapping Variables for High-Dimensional Sparse Regression with Correlated Measurements,” NIPS 2013, journal preprint 2014.

Abstract: We consider the high-dimensional sparse linear regression problem of accurately estimating a sparse vector using a small number of linear measurements that are contaminated by noise. It is well known that the standard cadre of computationally tractable sparse regression algorithms—such as the Lasso, Orthogonal Matching Pursuit (OMP), and their extensions—perform poorly when the measurement matrix contains highly correlated columns. To address this shortcoming, we develop a simple greedy algorithm, called SWAP, which iteratively swaps variables until convergence. SWAP is surprisingly effective in handling measurement matrices with high correlations. In fact, we prove that SWAP outputs the true support, the locations of the non-zero entries in the sparse vector, under a relatively mild condition on the measurement matrix. Furthermore, we show that SWAP can be used to boost the performance of any sparse regression algorithm. We empirically demonstrate the advantages of SWAP by comparing it with several state-of-the-art sparse regression algorithms.
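To make the swapping idea concrete, the Python sketch below is a simplified stand-in (not the released SWAP code): starting from the support produced by any sparse regression algorithm, it repeatedly exchanges one selected column for one unselected column whenever the exchange lowers the least-squares residual, and stops when no exchange helps.

    import numpy as np

    def _residual(X, y, S):
        """Squared least-squares residual restricted to columns S."""
        beta, *_ = np.linalg.lstsq(X[:, S], y, rcond=None)
        return float(np.linalg.norm(y - X[:, S] @ beta) ** 2)

    def swap_support(y, X, support):
        """Greedy swapping: at each pass, apply the single exchange of a selected
        column for an unselected column that most reduces the residual."""
        support = list(support)
        p = X.shape[1]
        current = _residual(X, y, support)
        while True:
            best_support, best_val = None, current
            for pos in range(len(support)):
                for j in range(p):
                    if j in support:
                        continue
                    candidate = support[:pos] + [j] + support[pos + 1:]
                    val = _residual(X, y, candidate)
                    if val < best_val:
                        best_support, best_val = candidate, val
            if best_support is None:          # no improving swap -> converged
                return sorted(support)
            support, current = best_support, best_val

Seeding the routine with the support returned by, say, the Lasso or OMP is the sense in which SWAP can boost an existing algorithm.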


The above example illustrates the advantages of using SWAP for regression with correlated measurements (see Figure 3 in http://dsp.rice.edu/publications/swap-journal).  The x-axis corresponds to the amount of correlation in the measurement matrix, and the y-axis corresponds to the mean true positive rate (TPR), i.e., the fraction of the true support that is correctly recovered.  The dashed lines correspond to traditional algorithms, while the solid lines correspond to SWAP-based algorithms.  SWAP clearly boosts the performance of the traditional algorithms; in particular, as the correlation grows, SWAP recovers a larger fraction of the variables in the true support.

Software: http://dsp.rice.edu/software/swap


On the PaTh to Greatness

D. Vats and R. G. Baraniuk, “Path Thresholding: Asymptotically Tuning-Free High-Dimensional Sparse Regression,” in Proceedings of the 17th International Conference on Artificial Intelligence and Statistics (AISTATS), 2014.

Abstract: In this paper, we address the challenging problem of selecting tuning parameters for high-dimensional sparse regression. We propose a simple and computationally efficient method, called path thresholding (PaTh), which transforms any tuning parameter-dependent sparse regression algorithm into an asymptotically tuning-free sparse regression algorithm. More specifically, we prove that, as the problem size becomes large (in the number of variables and in the number of observations), PaTh performs accurate sparse regression, under appropriate conditions, without specifying a tuning parameter. In finite-dimensional settings, we demonstrate that PaTh can alleviate the computational burden of model selection algorithms by significantly reducing the search space of tuning parameters.
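The sketch below illustrates the general recipe in Python: sweep a precomputed solution path from sparsest to densest and stop at the first support whose residual looks like pure noise. The stopping test shown here is a generic universal-threshold check on the residual correlations (assuming the columns of X are standardized to squared norm n), used as a stand-in for the exact PaTh criterion in the paper; all names are illustrative.

    import numpy as np

    def path_threshold(y, X, path_supports, c=2.0):
        """Pick a model from a precomputed path (e.g., Lasso, OMP, or FoBa supports
        ordered from sparsest to densest) without hand-tuning a parameter.
        NOTE: generic stand-in stopping rule, not the paper's exact test."""
        n, p = X.shape
        for S in path_supports:
            S = list(S)
            beta, *_ = np.linalg.lstsq(X[:, S], y, rcond=None)
            r = y - X[:, S] @ beta                      # residual at this support
            sigma2 = float(r @ r) / max(n - len(S), 1)  # noise-variance estimate
            # If S covers the true support, no remaining column should correlate
            # with the (essentially pure-noise) residual beyond ~ sigma*sqrt(2 n log p).
            if np.max(np.abs(X.T @ r)) <= np.sqrt(c * sigma2 * n * np.log(p)):
                return S
        return list(path_supports[-1])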


The above example illustrates the advantages of using PaTh on real data.  The first figure applies the forward-backward (FoBa) sparse regression algorithm to the UCI crime data; the horizontal axis specifies the sparsity level and the vertical axis the coefficient values.  The second figure applies PaTh to the solution path in the first figure, reducing the total number of candidate solutions from 50 to 4.  We observe similar trends for the gene data (last two figures).

Software: http://dsp.rice.edu/software/path


Improved STEM Learning with Cognitive Science

A. C. Butler, E. J. Marsh, J. P. Slavinsky, and R. G. Baraniuk, “Integrating Cognitive Science and Technology Improves Learning in a STEM Classroom,” Educational Psychology Review, March 2014.

Preprint version of the paper
Press release

Abstract:  The most effective educational interventions often face significant barriers to widespread implementation because they are highly specific, resource-intense, and/or require comprehensive reform.  We argue for an alternative approach to improving education: leveraging technology and cognitive science to develop interventions that generalize, scale, and can be easily implemented within any curriculum. In a classroom experiment, we investigated whether three simple, but powerful principles from cognitive science could be combined to improve learning.  Although implementing these principles only required a few small changes to standard practice in a college engineering course, it significantly increased student performance on exams.  Our findings highlight the potential for developing inexpensive, yet effective educational interventions that can be implemented worldwide.
