

M. A. Davenport, P. T. Boufounos, M. B. Wakin, and R. G. Baraniuk, "Signal Processing With Compressive Measurements," IEEE Journal of Selected Topics in Signal Processing, Vol. 4, No. 2, April 2010.

Abstract: The recently introduced theory of compressive sensing enables the recovery of sparse or compressible signals from a small set of nonadaptive, linear measurements. If properly chosen, the number of measurements can be much smaller than the number of Nyquist-rate samples. Interestingly, it has been shown that random projections are a near-optimal measurement scheme. This has inspired the design of hardware systems that directly implement random measurement protocols. However, despite the intense focus of the community on signal recovery, many (if not most) signal processing problems do not require full signal recovery. In this paper, we take some first steps toward solving inference problems (such as detection, classification, or estimation) and filtering problems using only compressive measurements and without ever reconstructing the signals involved. We provide theoretical bounds along with experimental results.
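For intuition, here is a minimal Python sketch of the paper's central idea: detecting a known signal directly from random compressive measurements, with no reconstruction step. The dimensions, signal, and noise level are hypothetical stand-ins, not the authors' experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 1024, 64                                # ambient dimension vs. number of measurements
Phi = rng.standard_normal((M, N)) / np.sqrt(M) # random projection (measurement) matrix

s = np.sin(2 * np.pi * 50 * np.arange(N) / N)  # known target signal
noise = 0.5 * rng.standard_normal(N)

# Compressive measurements y = Phi @ x; detection correlates y against
# Phi @ s directly in the measurement domain -- x is never reconstructed.
template = Phi @ s
for label, x in [("present", s + noise), ("absent", noise)]:
    y = Phi @ x
    stat = y @ template / np.linalg.norm(template)
    print(f"signal {label}: detection statistic = {stat:.2f}")
```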

IEEE Xplore final version
Preprint version
Award Information

OpenStax College today unveiled three new textbooks: Algebra and Trigonometry, College Algebra, and Chemistry. Our growing catalog of free textbooks (15 titles to date) will save 260,000 students at nearly 2000 institutions an estimated $25 million this academic year alone.

Our growth curve has quieted most of those who doubted the sustainability of open education. Today, six times as many students are using our books as were two years ago, and we are well ahead of our goal to eventually save students $120 million per year.

Thanks to the William and Flora Hewlett Foundation, the Laura and John Arnold Foundation, the Bill & Melinda Gates Foundation, the 20 Million Minds Foundation, the Maxfield Foundation, the Calvin K. Kazanjian Foundation, the Bill and Stephanie Sick Fund, and the Leon Lowenstein Foundation for all their support of OpenStax!


A. Patel, T. Nguyen, and R. G. Baraniuk, "A Probabilistic Theory of Deep Learning," arXiv preprint, arxiv.org/abs/1504.00641, 2 April 2015. An updated version appeared at NIPS 2016.

Abstract: A grand challenge in machine learning is the development of computational algorithms that match or outperform humans in perceptual inference tasks such as visual object and speech recognition. The key factor complicating such tasks is the presence of numerous nuisance variables, for instance, the unknown object position, orientation, and scale in object recognition or the unknown voice pronunciation, pitch, and speed in speech recognition. Recently, a new breed of deep learning algorithms has emerged for high-nuisance inference tasks; they are constructed from many layers of alternating linear and nonlinear processing units and are trained using large-scale algorithms and massive amounts of training data. The recent success of deep learning systems is impressive -- they now routinely yield pattern recognition systems with near- or super-human capabilities -- but a fundamental question remains: Why do they work? Intuitions abound, but a coherent framework for understanding, analyzing, and synthesizing deep learning architectures has remained elusive.

We answer this question by developing a new probabilistic framework for deep learning based on a Bayesian generative probabilistic model that explicitly captures variation due to nuisance variables.  The graphical structure of the model enables it to be learned from data using classical expectation-maximization techniques.  Furthermore, by relaxing the generative model to a discriminative one, we can recover two of the current leading deep learning systems, deep convolutional neural networks (DCNs) and random decision forests (RDFs), providing insights into their successes and shortcomings as well as a principled route to their improvement.

The figure below illustrates a mapping from our Deep Rendering Model (DRM) to its factor graph to a Deep Convolutional Network (DCN) at one level of abstraction. The factor graph representation of the DRM supports efficient inference algorithms such as max-sum message passing. The computations that implement max-sum message passing match those of a DCN.
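To make the correspondence concrete, here is a toy Python sketch of our own (not code from the paper): max-sum inference over an unknown translation nuisance reduces to correlation with a template (the "sum", i.e., a convolutional filter response) followed by maximization over translations (the "max", i.e., max-pooling).

```python
import numpy as np

rng = np.random.default_rng(1)
signal = rng.standard_normal(32)
template = rng.standard_normal(5)  # rendering template for one class

# "Sum" step: the log-likelihood of the template at each candidate
# translation is, up to constants, a correlation -- a convolutional filter.
scores = np.correlate(signal, template, mode="valid")

# "Max" step: maximizing over the unknown translation is max-pooling.
print(f"max-sum evidence = {scores.max():.2f} at shift {scores.argmax()}")
```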

By Ann Carrns, 25 February 2015

College students could save an average of $128 a course if traditional textbooks were replaced with free or low-cost “open-source” electronic versions, a new report finds.

Textbook costs are particularly burdensome for students at two-year community colleges; the annual cost, more than $1,300, is about 40 percent of the average cost of tuition, according to the College Board.

Jennifer Swain, 21, a student at South Florida State College, said her instructor for a physics class used an open-source textbook (College Physics, from OpenStax). She likes that she can download it onto an app on her iPad that allows her to highlight sections of text, just as she could in a traditional textbook — but this one is free, whereas a comparable hard copy physics text would cost about $250. A classmate, Ashley Edmonson, 24, said it’s convenient to access the textbook from any device, so she doesn’t have to lug around another tome: “They’re really hard to carry,” she said.

Read the entire article.

The Rice/OpenStax Workshop on Personalized Learning will be held Wednesday, 1 April 2015 on the Rice University campus in Houston, Texas.

The previous two workshops, in 2013 and 2014, focused on Scaling Up Success in computer-based learning and Bridging the Laboratory-Classroom Divide in cognitive science, and featured Steve Ritter, David Kuntz, Mark McDaniel, Jeff Karpicke, Kurt Van Lehn, Michael Mozer, David Pritchard, Neil Heffernan, Zach Pardos, and Winslow Burleson.

This is the third incarnation of the workshop. We plan to focus on Modeling and Correcting Student Understanding through two related themes: (1) assessing what students know and forming an accurate picture of student understanding, and (2) determining the causes of student misunderstanding as well as methods for remediation. The workshop will be attended by leading experts in educational psychology, computer science, educational data mining, and cognitive science.

Registration information

ELEC301x - Discrete Time Signals and Systems (Part 1 - Time Domain)
ELEC301x - Discrete Time Signals and Systems (Part 2 - Frequency Domain)

Enter the world of signal processing: analyze and extract meaning from the signals around us!

About the Course: Technological innovations have revolutionized the way we view and interact with the world around us. Editing a photo, re-mixing a song, automatically measuring and adjusting chemical concentrations in a tank: each of these tasks requires real-world data to be captured by a computer and then manipulated digitally to extract the salient information. Ever wonder how signals from the physical world are sampled, stored, and processed without losing the information required to make predictions and extract meaning from the data? Students will find out in this rigorous mathematical introduction to the engineering field of signal processing: the study of signals and systems that extract information from the world around us.

This course will teach students to analyze discrete-time signals and systems in both the time and frequency domains. Students will learn convolution, discrete Fourier transforms, the z-transform, and digital filtering, and will apply these concepts to build a digital audio synthesizer in MATLAB.

Prerequisites include strong problem-solving skills, the ability to understand mathematical representations of physical systems, and an advanced mathematical background (one-dimensional integration, matrices, vectors, basic linear algebra, imaginary numbers, and sum and series notation). This course is an excerpt from an advanced undergraduate class at Rice University taught to all electrical and computer engineering majors.
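The course itself works in MATLAB; as a small taste of the material, here is an analogous sketch in Python that synthesizes a sampled sinusoid (a discrete-time signal) and smooths it with a moving-average filter via discrete-time convolution. The sampling rate, pitch, and filter length are arbitrary choices for illustration.

```python
import numpy as np

fs = 8000                                  # sampling rate (Hz)
n = np.arange(fs // 2)                     # half a second of sample indices
note = np.sin(2 * np.pi * 440 * n / fs)    # concert A as a discrete-time signal x[n]

h = np.ones(8) / 8                         # impulse response of a moving-average filter
smoothed = np.convolve(note, h)            # discrete-time convolution y[n] = (x * h)[n]

print(len(note), len(smoothed))            # convolution lengthens the output by len(h) - 1
```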

This offering of the course is split into two convenient mini-courses.  Part 1 covers signals and systems from a time-domain perspective, while Part 2 takes a frequency-domain perspective.

Sign up now for Part 1 and Part 2 and join in the fun!

T. Goldstein, C. Studer, and R. G. Baraniuk, "A Field Guide to Forward-Backward Splitting with a FASTA Implementation," arXiv preprint, arxiv.org/abs/1411.3406, December 2014.

Non-differentiable and constrained optimization play a key role in machine learning, signal and image processing, communications, and beyond. For high-dimensional minimization problems involving large datasets or many unknowns, the forward-backward splitting (FBS) method (also called the proximal gradient method) provides a simple, practical solver. Despite its apparent simplicity, the performance of forward-backward splitting is highly sensitive to implementation details. Our research explores FBS with special emphasis on practical implementation concerns, considering issues such as stepsize selection, acceleration, stopping conditions, and initialization. Our new solver FASTA (short for Fast Adaptive Shrinkage/Thresholding Algorithm) incorporates many variations of forward-backward splitting and provides a simple interface for applying FBS to a broad range of problems.
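To fix ideas, here is a bare-bones forward-backward splitting iteration for the LASSO problem min_x 0.5||Ax - b||^2 + mu||x||_1, written as our own Python sketch. FASTA itself adds the adaptive stepsizes, acceleration, and stopping conditions discussed above; the problem sizes and parameters below are arbitrary.

```python
import numpy as np

def fbs_lasso(A, b, mu, tau, iters=200):
    """Forward-backward splitting: gradient step on the smooth term,
    then the proximal (soft-thresholding) step on the l1 term."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                              # forward step on 0.5||Ax - b||^2
        z = x - tau * grad
        x = np.sign(z) * np.maximum(np.abs(z) - tau * mu, 0)  # backward step: prox of mu*||.||_1
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0                               # sparse ground truth
b = A @ x_true

tau = 0.9 / np.linalg.norm(A, 2) ** 2          # stepsize below 1/L, L = ||A||_2^2
print(np.round(fbs_lasso(A, b, mu=0.1, tau=tau)[:8], 2))
```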

Software for FASTA is available here

Example:  "Two Moons" data set for testing machine learning (classification) algorithms

Convergence of FASTA (red) on Two Moons versus more conventional forward-backward splitting (FBS) techniques

A. Lan, D. Vats, A. Waters, and R. G. Baraniuk, "Mathematical Language Processing: Automatic Grading and Feedback for Open Response Mathematical Questions," ACM Conference on Learning at Scale, Vancouver, March 2015.

Abstract:  While computer and communication technologies have provided effective means to scale up many aspects of education, the submission and grading of assessments such as homework assignments and tests remains a weak link. In this paper, we study the problem of automatically grading the kinds of open response mathematical questions that figure prominently in STEM (science, technology, engineering, and mathematics) courses. Our data-driven framework for mathematical language processing (MLP) leverages solution data from a large number of learners to evaluate the correctness of their solutions, assign partial-credit scores, and provide feedback to each learner on the likely locations of any errors.

MLP takes inspiration from the success of natural language processing for text data and comprises three main steps. First, we convert each solution to an open response mathematical question into a series of numerical features. Second, we cluster the features from several solutions to uncover the structures of correct, partially correct, and incorrect solutions. We develop two different clustering approaches, one that leverages generic clustering algorithms and one based on Bayesian nonparametrics. Third, we automatically grade the remaining (potentially large number of) solutions based on their assigned cluster and one instructor-provided grade per cluster. As a bonus, we can track the cluster assignment of each step of a multistep solution and determine when it departs from a cluster of correct solutions, which enables us to indicate the likely locations of errors to learners. We test and validate MLP on real-world MOOC data to demonstrate how it can substantially reduce the human effort required in large-scale educational platforms.
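The following Python sketch schematizes the three steps with stand-in components: toy two-dimensional features and off-the-shelf k-means in place of the paper's feature extraction and Bayesian nonparametric clustering, which are considerably more sophisticated.

```python
import numpy as np
from sklearn.cluster import KMeans

# Step 1: each solution becomes a numerical feature vector (toy 2-D features here).
features = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9], [0.5, 0.4]])

# Step 2: cluster solutions into groups with similar structure.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)

# Step 3: the instructor grades one representative solution per cluster, and
# that grade propagates to every other solution assigned to the same cluster.
instructor_grades = {0: 3, 1: 1}                        # cluster id -> points awarded
auto_grades = [instructor_grades[c] for c in km.labels_]
print(auto_grades)
```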

The figure below illustrates the clusters obtained by MLP from the solutions of 100 learners to four different mathematical questions (two from algebra and two from Fourier analysis). Each node corresponds to a solution. Nodes with the same color correspond to solutions that are estimated to be in the same cluster. The thickness of the edge between two solutions is proportional to their similarity score. Boxed solutions are correct; others have varying degrees of correctness.

The examples below demonstrate the real-time feedback generated by MLP while learners enter their solutions. After each expression, we compute both the probability that the learner's solution belongs to a cluster that does not have full credit (3 points) and the learner's expected grade. An alert is generated when the expected credit is less than full credit.

A. Waters, D. Tinapple, and R. G. Baraniuk, "BayesRank: A Bayesian Approach to Ranked Peer Grading," ACM Conference on Learning at Scale, Vancouver, March 2015.

Abstract: Advances in online and computer-supported education afford exciting opportunities to revolutionize the classroom, while also presenting a number of new challenges not faced in traditional educational settings. Foremost among these challenges is the problem of accurately and efficiently evaluating learner work as the class size grows, which is directly related to the larger goal of providing quality, timely, and actionable formative feedback. Recently there has been a surge of interest in using peer grading methods coupled with machine learning to accurately and fairly evaluate learner work while alleviating the instructor bottleneck and grading overload. Prior work in peer grading almost exclusively focuses on numerically scored grades, either real-valued or ordinal. In this work, we consider the implications of peer ranking, in which learners rank a small subset of peer work from strongest to weakest, and propose new types of computational analyses that can be applied to this ranking data. We adopt a Bayesian approach to the ranked peer grading problem and develop a novel model and method for utilizing ranked peer-grading data. We additionally develop a novel procedure for adaptively identifying which work should be ranked by particular peers in order to dynamically resolve ambiguity in the data and rapidly form a clearer picture of learner performance. We showcase our results on both synthetic data and several real-world educational datasets.

The figure below compares BayesRank to the known ground-truth item ordering in a synthetic experiment using Kendall's tau metric, which measures the general agreement between two ordered sets. Kendall's tau examines each pair of items in one ranking and compares the same pair in the second ranking to check for consistency; a value of +1/-1 corresponds to perfect agreement/disagreement. The two curves correspond to BayesRank with observations generated using adaptive assignment (red) and random item assignment (blue); we plot the tau metric as a function of the size of the class N (left) and as a function of the number of items K assigned to each grader (right). In all cases, BayesRank with adaptive assignment achieves significantly better performance than random assignment.
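For reference, the evaluation metric is easy to compute; here is a quick Python illustration with hypothetical rankings (not the paper's data):

```python
from scipy.stats import kendalltau

ground_truth = [1, 2, 3, 4, 5]   # true ordering of five items by quality
recovered    = [1, 3, 2, 4, 5]   # ordering inferred from peer rankings

tau, _ = kendalltau(ground_truth, recovered)
print(f"Kendall's tau = {tau:.2f}")  # +1 = perfect agreement, -1 = perfect disagreement
```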


Richard Baraniuk, the Victor E. Cameron Professor of Electrical and Computer Engineering at Rice University, has been named recipient of the 2014 IEEE Signal Processing Society Technical Achievement Award for "contributions to the theory and applications of sparsity and compressive sensing."

The Technical Achievement Award honors a person who, over a period of years, has made outstanding technical contributions to theory and/or practice as demonstrated by publications, patents, or recognized impact on the field.  The award will be presented at ICASSP 2015 in Brisbane, Australia, where Baraniuk is also a plenary speaker presenting his work on machine learning for open education.