Turning Textbook Highlighting into Time Well-Spent

College students love highlighting textbook passages while they study, and a team of researchers in three states will apply the latest techniques from machine learning and cognitive science to help turn that habit into time well-spent.

The four-year, $1 million research program at Rice University, the University of Colorado-Boulder, and the University of California-San Diego (UCSD) is one of 18 grants announced today by the National Science Foundation as part of the BRAIN Initiative, a coordinated research effort to accelerate the development of new neurotechnologies.

“Highlighting is something students naturally do on their own, and we want to create software that can use those highlights to improve both their comprehension and knowledge retention,” said Phillip Grimaldi, a co-investigator on the project and research scientist at the Rice University-based nonprofit textbook publisher OpenStax.

OpenStax uses philanthropic grants to produce high-quality, peer-reviewed textbooks that are free online and used by more than 680,000 college students at more than 2,000 colleges and universities. Grimaldi said the research team plans to use OpenStax books and learning tools in a number of ways.

First, they will ask OpenStax users to volunteer their highlights for a database that can be mined for clues about the volunteers’ understanding of the text. The researchers also will conduct laboratory experiments at Rice, CU-Boulder and UCSD to develop new software that leverages the highlighted information to improve learning outcomes.

One reason the big-data approach is needed, Grimaldi said, is that highlighting by itself isn’t a very effective way to learn.

“A number of studies have shown that highlighting does little to improve learning outcomes, but students tend to think that it does, and it makes them feel good about studying,” he said. “At the same time, college students generally aren’t willing to change how they study, so we want to piggyback on what they’re already doing — spontaneously annotating passages of text — and turn that from a marginal activity into one that improves learning.”

This project is funded by a grant from the National Science Foundation’s Cyberlearning and Future Learning Technologies program. The researchers plan to create software that can predict how well students will perform on tests based on what the students highlight in their textbooks. The researchers will then create tools that use the material a student highlights to create customized quizzes and reviews for that student. The team also will try to determine the optimum time to give those quizzes and reviews to maximize comprehension and retention.
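As a rough illustration of the kind of prediction described above, the sketch below fits a simple classifier that maps a student’s highlighting behavior to the probability of passing a section quiz and flags low-probability sections for review. This is a minimal, hypothetical example rather than the project’s software; the features, training data, and 0.5 threshold are invented for illustration.

```python
# Minimal, hypothetical sketch: predict quiz performance from highlight
# features and flag sections for review. The features, training data, and
# 0.5 threshold are illustrative assumptions, not the project's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per (student, section): [fraction of section highlighted,
# number of highlight spans, mean highlight length in words].
X = np.array([
    [0.05,  2,  8],
    [0.40, 15, 30],
    [0.12,  5, 12],
    [0.60, 25, 45],
    [0.08,  3, 10],
    [0.30, 10, 25],
])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = passed the section quiz

model = LogisticRegression().fit(X, y)

# Score a new student's highlighting pattern and decide whether to
# schedule a targeted review quiz for that section.
new_section = np.array([[0.35, 12, 28]])
p_pass = model.predict_proba(new_section)[0, 1]
action = "schedule a review quiz" if p_pass < 0.5 else "no review needed yet"
print(f"predicted pass probability {p_pass:.2f} -> {action}")
```

In a deployed tool, the same prediction could also drive when the review is delivered, in line with the team’s interest in optimally timed quizzes.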

“Data from highlights supplied by OpenStax users will enable us to create tools that are both sensitive to each student’s interests and robust to poor highlighting choices,” said Richard Baraniuk, co-principal investigator on the project, founder and director of OpenStax and Rice’s Victor E. Cameron Professor of Engineering. “The idea is to reformulate selected passages into review questions that encourage the active reconstruction and elaboration of knowledge. The design and implementation of the tool will be informed by both randomized controlled studies within the innovative OpenStax textbook platform and in coordinated laboratory studies.”

CU-Boulder’s Mike Mozer is the principal investigator on the grant, and co-principal investigator Hal Pashler will lead the activities at UCSD.

Read the press release


MOOC Adventures in Signal Processing

T. A. Baran, R. G. Baraniuk, A. V. Oppenheim, P. Prandoni, and M. Vetterli, “MOOC Adventures in Signal Processing: Bringing DSP to the Era of Massive Open Online Courses,” IEEE Signal Processing Magazine, Vol. 33, No. 4, July 2016

Abstract: In higher education circles, 2012 may be known as the “year of the MOOC”; the launch of several high-profile initiatives, both for profit (Coursera, Udacity) and not for profit (edX), created an electrified feeling in the community, with massive open online courses (MOOCs) becoming the hottest new topic in academic conversation. The sudden attention was perhaps slightly forgetful of many notable attempts at distance learning that occurred before, from campus TV networks to well-organized online repositories of teaching material. The new mode of delivery, however, was ushered in by a few large-scale computer science courses, whose broad success triggered significant media attention.

Paper at IEEE Xplore
Preprint at Rice DSP


OpenStax Calculus by Gil Strang

This semester, OpenStax is excited to announce a slate of new math titles, including Calculus Volumes 1, 2, and 3 by Prof. Gil Strang of MIT.

We’ve also revamped Precalculus, College Algebra, and Algebra and Trigonometry with a full layout redesign, and they are now available for free online and at low cost in print.


FlatCam: Using Computation to Replace Lenses

M. S. Asif, A. Ayremlou, A. Sankaranarayanan, A. Veeraraghavan, and R. G. Baraniuk, “FlatCam: Thin, Bare-Sensor Cameras using Coded Aperture and Computation,” arXiv preprint arxiv.org/abs/1509.00116, 2015

FlatCam is a thin form-factor lensless camera that consists of a coded mask placed on top of a bare, conventional sensor array. Unlike a traditional, lens-based camera where an image of the scene is directly recorded on the sensor pixels, each pixel in FlatCam records a linear combination of light from multiple scene elements. A computational algorithm is then used to demultiplex the recorded measurements and reconstruct an image of the scene. FlatCam is an instance of a coded aperture imaging system; however, unlike the vast majority of related work, we place the coded mask extremely close to the image sensor, which enables a thin system. We employ a separable mask to ensure that both calibration and image reconstruction are scalable in terms of memory requirements and computational complexity. We demonstrate the potential of the FlatCam design using two prototypes: one at visible wavelengths and one at infrared wavelengths.

Figure: FlatCam architecture. (a) Every light source within the camera field of view contributes to every pixel in the multiplexed image formed on the sensor. A computational algorithm reconstructs the image of the scene. Inset shows the mask-sensor assembly of our prototype, in which a binary coded mask is placed 0.5 mm away from an off-the-shelf digital image sensor. (b) An example of sensor measurements and the image reconstructed by solving a computational inverse problem.
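To make the “linear combination” and “computational inverse problem” concrete, here is a minimal numerical sketch of a separable coded-aperture camera. The scene size, random ±1 mask matrices, and Tikhonov solver are stand-ins chosen for illustration; the prototype’s actual mask, calibration procedure, and reconstruction algorithms are more involved.

```python
# Toy model of separable lensless imaging: Y = Phi_L @ X @ Phi_R.T + noise,
# followed by a Tikhonov-regularized reconstruction. The random +/-1 masks
# and dimensions below are illustrative, not the FlatCam prototype's mask.
import numpy as np

rng = np.random.default_rng(0)
n, m = 32, 48                              # scene is n x n, sensor is m x m

X = rng.random((n, n))                     # toy scene
Phi_L = rng.choice([-1.0, 1.0], (m, n))    # row mixing induced by the mask
Phi_R = rng.choice([-1.0, 1.0], (m, n))    # column mixing induced by the mask

# Each sensor pixel mixes light from every scene pixel, but the mixing
# factors separately over rows and columns thanks to the separable mask.
Y = Phi_L @ X @ Phi_R.T + 0.01 * rng.standard_normal((m, m))

def tikhonov_pinv(A, lam=1e-2):
    """Regularized pseudoinverse via the SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ np.diag(s / (s**2 + lam)) @ U.T

# Demultiplex rows and columns independently; memory and compute stay
# modest because only the small matrices Phi_L and Phi_R are inverted.
X_hat = tikhonov_pinv(Phi_L) @ Y @ tikhonov_pinv(Phi_R).T
print("relative error:", np.linalg.norm(X_hat - X) / np.linalg.norm(X))
```

The separable structure is what keeps the inverse problem tractable: instead of inverting one enormous measurement matrix, the reconstruction only ever touches the two small mask matrices.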



OpenStax Passes the 20% Threshold

Free textbooks from Rice University-based publisher OpenStax are now in use at one in five degree-granting U.S. colleges and universities and have already saved college students $39 million in the 2015-16 academic year. More info is available here.


Thomson Reuters’ Highly Cited Researcher for 2015

Richard Baraniuk was one of 3,126 researchers in the sciences and social sciences who authored papers that ranked among the top 1% most cited for their subject field and year of publication in Thomson Reuters’ academic citation indexing and search service, Web of Knowledge.

In addition, he was selected as one of The World’s Most Influential Scientific Minds 2015.


Open Postdoc Positions

Thanks to some recent funding from NSF, the DARPA REVEAL program, the IARPA MICrONS program, and several philanthropic foundations, we’re hiring postdocs in three different areas:

Rice DSP postdoc alums have gone on to academic positions at Cornell, Columbia, CMU, Georgia Tech, U. Maryland, U. Wisconsin, U. Minnesota, NCSU, McGill, EPFL, and KU-Leuven.  Email <richb at rice dot edu> for more information.


IEEE Signal Processing Society 2015 Best Paper Award


M. A. Davenport, P. T. Boufounos, M. B. Wakin, and R. G. Baraniuk, “Signal Processing With Compressive Measurements,” IEEE Journal of Selected Topics in Signal Processing, Vol. 4, No. 2, April 2010

Abstract: The recently introduced theory of compressive sensing enables the recovery of sparse or compressible signals from a small set of nonadaptive, linear measurements. If properly chosen, the number of measurements can be much smaller than the number of Nyquist-rate samples. Interestingly, it has been shown that random projections are a near-optimal measurement scheme. This has inspired the design of hardware systems that directly implement random measurement protocols. However, despite the intense focus of the community on signal recovery, many (if not most) signal processing problems do not require full signal recovery. In this paper, we take some first steps in the direction of solving inference problems—such as detection, classification, or estimation—and filtering problems using only compressive measurements and without ever reconstructing the signals involved. We provide theoretical bounds along with experimental results.
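The flavor of inference without reconstruction can be illustrated with a tiny compressive classifier: correlate the compressive measurements against compressively projected templates, relying on the fact that random projections approximately preserve inner products. The dimensions, sinusoidal templates, and noise level below are invented for the sketch and are not the paper’s experimental setup.

```python
# Toy compressive classification without signal recovery: pick the template
# whose *compressed* version best correlates with the measurements y = Phi x.
# The signal length, measurement count, and templates are illustrative
# assumptions, not the paper's experiments.
import numpy as np

rng = np.random.default_rng(1)
n, m = 256, 40                                    # ambient dim, measurements
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random measurement matrix

t = np.arange(n)
templates = [np.sin(2 * np.pi * 3 * t / n),       # class 0
             np.sin(2 * np.pi * 11 * t / n)]      # class 1

x = templates[1] + 0.1 * rng.standard_normal(n)   # unknown signal (class 1)
y = Phi @ x                                       # all we ever observe

# Random projections approximately preserve inner products, so correlating
# y against Phi @ s ranks the templates without reconstructing x.
scores = [abs(y @ (Phi @ s)) for s in templates]
print("scores:", np.round(scores, 2), "-> decided class", int(np.argmax(scores)))
```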

IEEE Xplore final version
Preprint version
Award Information


OpenStax College To Save Students $25 million This School Year

OpenStax College today unveiled three new textbooks: Algebra and Trigonometry, College Algebra, and Chemistry. Our growing catalog of free textbooks (15 titles to date) will save 260,000 students at nearly 2,000 institutions an estimated $25 million this academic year alone.

Our growth curve has quieted most of those who doubted the sustainability of open education. Today, six times as many students are using our books as were just two years ago, and we are well ahead of our goal to eventually save students $120 million per year.

Thanks to the William and Flora Hewlett Foundation, the Laura and John Arnold Foundation, the Bill & Melinda Gates Foundation, the 20 Million Minds Foundation, the Maxfield Foundation, the Calvin K. Kazanjian Foundation, the Bill and Stephanie Sick Fund and the Leon Lowenstein Foundation for all their support of OpenStax!



A Probabilistic Theory of Deep Learning

A. Patel, T. Nguyen, and R. G. Baraniuk, “A Probabilistic Theory of Deep Learning,” arXiv preprint arxiv.org/abs/1504.00641, 2 April 2015. An updated version appeared at NIPS 2016.

Abstract: A grand challenge in machine learning is the development of computational algorithms that match or outperform humans in perceptual inference tasks such as visual object and speech recognition. The key factor complicating such tasks is the presence of numerous nuisance variables, for instance, the unknown object position, orientation, and scale in object recognition or the unknown voice pronunciation, pitch, and speed in speech recognition. Recently, a new breed of deep learning algorithms has emerged for high-nuisance inference tasks; they are constructed from many layers of alternating linear and nonlinear processing units and are trained using large-scale algorithms and massive amounts of training data. The recent success of deep learning systems is impressive — they now routinely yield pattern recognition systems with near- or super-human capabilities — but a fundamental question remains: Why do they work? Intuitions abound, but a coherent framework for understanding, analyzing, and synthesizing deep learning architectures has remained elusive.

We answer this question by developing a new probabilistic framework for deep learning based on a Bayesian generative probabilistic model that explicitly captures variation due to nuisance variables.  The graphical structure of the model enables it to be learned from data using classical expectation-maximization techniques.  Furthermore, by relaxing the generative model to a discriminative one, we can recover two of the current leading deep learning systems, deep convolutional neural networks (DCNs) and random decision forests (RDFs), providing insights into their successes and shortcomings as well as a principled route to their improvement.

The figure below illustrates an example mapping from our Deep Rendering Model (DRM) to its factor graph to a Deep Convolutional Network (DCN) at one level of abstraction. The factor graph representation of the DRM supports efficient inference algorithms such as max-sum message passing, and the computations that implement max-sum message passing match those of a DCN.
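As a toy illustration of that correspondence (not the paper’s derivation), the sketch below runs max-sum inference over a 1-D translation nuisance: template correlation, half-rectification, and max-pooling over positions, which is the computation of a standard DCN layer. The signal, template, and pooling width are arbitrary choices made for the example.

```python
# Toy max-sum inference over a translation nuisance variable, showing that
# it reduces to the familiar correlate -> rectify -> max-pool DCN layer.
# The 1-D signal, template, and pooling width are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
signal = rng.standard_normal(32)
template = np.array([1.0, -2.0, 1.0])   # class template (convolutional filter)
pool_width = 4

# Score the template at every latent position of the nuisance variable
# (the "convolution" step).
scores = np.correlate(signal, template, mode="valid")

# Half-rectify: keep only configurations with positive evidence (ReLU).
scores = np.maximum(scores, 0.0)

# Max-marginalize the position within each block (max-pooling); the pooled
# values are the max-sum messages passed up to the next layer.
n_blocks = len(scores) // pool_width
messages = scores[: n_blocks * pool_width].reshape(n_blocks, pool_width).max(axis=1)
print("max-sum messages:", np.round(messages, 2))
```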
