
ELEC301x - Discrete Time Signals and Systems

Enter the world of signal processing: analyze and extract meaning from the signals around us!

About the Course: Technological innovations have revolutionized the way we view and interact with the world around us. Editing a photo, re-mixing a song, automatically measuring and adjusting chemical concentrations in a tank: each of these tasks requires real-world data to be captured by a computer and then manipulated digitally to extract the salient information. Ever wonder how signals from the physical world are sampled, stored, and processed without losing the information required to make predictions and extract meaning from the data? Students will find out in this rigorous mathematical introduction to the engineering field of signal processing: the study of signals and systems that extract information from the world around us.

This course teaches students to analyze discrete-time signals and systems in both the time and frequency domains. Students will learn convolution, discrete Fourier transforms, the z-transform, and digital filtering, and will apply these concepts to build a digital audio synthesizer in MATLAB.

Prerequisites include strong problem-solving skills, the ability to understand mathematical representations of physical systems, and an advanced mathematical background (one-dimensional integration, matrices, vectors, basic linear algebra, imaginary numbers, and sum and series notation). This course is an excerpt from an advanced undergraduate class at Rice University taught to all electrical and computer engineering majors.
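To give a quick taste of the course's core tools, here is a minimal sketch in Python/NumPy (the course itself works in MATLAB) showing a synthesized tone, an FIR filter applied by discrete-time convolution, and the DFT used to read off the tone's frequency:

```python
import numpy as np

# Synthesize half a second of concert A (440 Hz); the sampling rate is an
# illustrative choice, not one prescribed by the course
fs = 8000
t = np.arange(0, 0.5, 1 / fs)
note = np.sin(2 * np.pi * 440 * t)

# A 3-tap moving-average FIR filter, applied by discrete-time convolution
h = np.ones(3) / 3
smoothed = np.convolve(note, h, mode="same")

# The DFT (computed via the FFT) reveals the tone's frequency content
spectrum = np.abs(np.fft.rfft(note))
freqs = np.fft.rfftfreq(len(note), 1 / fs)
print(f"spectral peak at {freqs[np.argmax(spectrum)]:.0f} Hz")  # ~440 Hz
```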

Sign up now and join in the fun!

R. G. Baraniuk, "Opening Education," to appear in The Bridge, National Academy of Engineering, 2013.

Abstract:  The world is increasingly connected, yet educational systems cling to the disconnected past. The open education movement provides new mechanisms to democratize education by interconnecting ideas, learners, and instructors in new kinds of constructs that replace traditional textbooks, courses, and certifications. Open education has the potential to realize the dream of providing not only universal access to all the world’s knowledge but also the tools required to acquire it. The result will be a revolutionary advance in the world’s standard of education at all levels.

A. S. Lan, A. E. Waters, C. Studer, and R. G. Baraniuk, "Sparse Factor Analysis for Learning and Content Analytics," to appear in Journal of Machine Learning Research, 2014.

Abstract:  We develop a new model and algorithms for machine learning-based learning analytics, which estimate a learner’s knowledge of the concepts underlying a domain, and content analytics, which estimate the relationships among a collection of questions and those concepts. Our model represents the probability that a learner provides the correct response to a question in terms of three factors: their understanding of a set of underlying concepts, the concepts involved in each question, and each question’s intrinsic difficulty. We estimate these factors given the graded responses to a collection of questions. The underlying estimation problem is ill-posed in general, especially when only a subset of the questions is answered. The key observation that enables a well-posed solution is the fact that typical educational domains of interest involve only a small number of key concepts. Leveraging this observation, we develop both a bi-convex maximum-likelihood and a Bayesian solution to the resulting SPARse Factor Analysis (SPARFA) problem. We also incorporate user-defined tags on questions to facilitate the interpretability of the estimated factors. Experiments with synthetic and real-world data demonstrate the efficacy of our approach. Finally, we make a connection between SPARFA and noisy, binary-valued (1-bit) dictionary learning that is of independent interest.
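A minimal sketch of the generative model the abstract describes, assuming the paper's usual notation (a question-concept association matrix W, a concept-knowledge matrix C, intrinsic difficulties mu, and an inverse probit link); this is a Python illustration, not the authors' released code:

```python
import numpy as np
from scipy.stats import norm

def sparfa_correct_prob(W, C, mu):
    # W : Q x K sparse, nonnegative question-concept association matrix
    # C : K x N matrix of each learner's knowledge of each concept
    # mu: length-Q vector of intrinsic question difficulties
    Z = W @ C + mu[:, None]   # latent response variable per (question, learner)
    return norm.cdf(Z)        # probit link: probability of a correct answer

# Toy instance: 4 questions, 2 concepts, 3 learners
rng = np.random.default_rng(0)
W = np.abs(rng.normal(size=(4, 2))) * (rng.random((4, 2)) < 0.5)  # sparse, nonnegative
C = rng.normal(size=(2, 3))
mu = rng.normal(size=4)
print(sparfa_correct_prob(W, C, mu))
```

Estimating W, C, and mu from graded responses is the inverse problem that the bi-convex and Bayesian SPARFA algorithms actually solve.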

The above example illustrates the result of applying SPARFA to data from a grade 8 science course in STEMscopes, an online science curriculum program. The data input to SPARFA consisted solely of whether a student answered a given homework or exam question correctly or incorrectly. From these limited and quantized data, SPARFA automatically estimates (a) a collection (in this case, five) of abstract “concepts” that underlie the course (“Concept 3” is illustrated here); (b) a graph that links each question (rectangular box) to one or more of the concepts (circles), with thicker links indicating a stronger association with the concept; (c) the intrinsic difficulty of each question, indicated by the number in each box; (d) descriptive word tags drawn from the text of the questions, their solutions, and instructor-provided metadata that make each concept interpretable (as shown for Concept 3); and (e) each student’s knowledge profile, which indicates both estimated knowledge of each concept and concepts ripe for remediation or enrichment.

Several follow-on papers extend the SPARFA framework.
Get your SPARFA merchandise while it's hot!

Revitalizing education at all levels and in all subject areas is a major global priority. In order to properly educate the leaders of tomorrow, we must move beyond the centuries-old, ingrained paradigm of education that views the process of learning as a “one-way street” in which knowledge is transmitted from teacher to learner via paper textbooks and lectures. Instead, we must provide learners with tools to effectively engage in self-regulated learning outside the classroom.

Over the past few decades, significant progress has been made on computer-based personalized learning that is responsive to the needs, skills, and characteristics of individual students. A personalized learning system closes the learning feedback loop by continuously monitoring and analyzing learner interactions with learning resources in order to assess progress, and then providing timely remediation, enrichment, or practice based on that analysis. Recently, learning analytics and personalized learning systems have leapt from the research lab to the marketplace. Indeed, much of the ed-tech startup activity and investment has been in this space.
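In code terms, that monitor-analyze-act cycle can be sketched as follows (a toy illustration; the thresholds and helper names are hypothetical placeholders, not any particular system's policy or API):

```python
def analyze(responses):
    # Crude progress estimate: fraction of correct answers per concept
    return {concept: sum(r) / len(r) for concept, r in responses.items()}

def next_action(mastery):
    # Placeholder policy: thresholds are illustrative, not from any product
    if mastery < 0.5:
        return "remediation"
    if mastery > 0.9:
        return "enrichment"
    return "practice"

# Graded responses per concept (1 = correct), as a system might log them
responses = {"fractions": [1, 0, 0, 1], "ratios": [1, 1, 1, 1]}
for concept, mastery in analyze(responses).items():
    print(concept, "->", next_action(mastery))
```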

Despite this early success, many important issues and challenges remain to be surmounted before personalized learning reaches the mainstream. The goal of this (annual) workshop was to bring together the intellectual leaders of this new movement in order to exchange ideas, network, and plot a course to the future. A particular focus was on how machine learning and “big data” have the potential to create new efficiencies in time and cost and significantly improve learning outcomes.

The workshop took place on 22 April 2013 on the Rice University campus.

Presenters
David Kuntz, VP-Research for Knewton
Steven Ritter, VP-Research for Carnegie Learning
Jascha Sohl-Dickstein, Khan Academy
David Pritchard, MIT
Neil Heffernan, WPI
Winslow Burleson, ASU
David Eagleman, BCM
Dan Wallach, Rice
Anna Rafferty, UC-Berkeley
Zach Pardos, MIT
Andrew Butler, Duke
Richard Baraniuk, Rice

An archived webcast is available here

Rice University is ranked No. 1 among the world’s top universities in the field of natural sciences and engineering for the quality and impact of its scientific publications, according to the Leiden rankings for 2013.

The Leiden rankings measure the scientific performance of 500 major universities around the world. The rankings are calculated by the Centre for Science and Technology Studies at Leiden University in the Netherlands. The 2013 rankings are based on indexed publications from 2008 to 2011 from the Web of Science bibliographic database produced by Thomson Reuters. Web of Science is a reference tool for retrieving accurate citation counts.

Read more

Connexions, OpenStax College, and OpenStax Tutor from the Rice University Center for Digital Learning and Scholarship (RDLS) will be well represented at SXSWedu 2013 in Austin, Texas, this week.

1:30-2:30pm on 4 March 2013
"Advances in Open Textbook Publishing Technology,"
Phil Schatz, Connexions
Ed Woodward, Connexions
Kathi Fletcher, Shuttleworth Foundation

3-4pm on 4 March 2013
"Personalized Learning Systems -
Worthy of the Hype
"?
Richard Baraniuk, RDLS Director
Andy Butler, Duke University

10:30-11:30 on 6 March 2013
"Open Education - Still a Chasm to Cross"
David Harris, OpenStax College
Daniel Williamson, Connexions

C. Hegde, A. C. Sankaranarayanan, W. Yin, and R. G. Baraniuk, "A Convex Approach for Learning Near-Isometric Linear Embeddings," submitted to Journal of Machine Learning Research, 2012.

Abstract: We propose a novel framework for the deterministic construction of linear, near-isometric embeddings of a finite set of data points. Given a set of training points X, we consider the secant set S(X) that consists of all pairwise difference vectors of X, normalized to lie on the unit sphere. We formulate an affine rank minimization problem to construct a matrix that preserves the norms of all the vectors in S(X) up to a distortion parameter δ. While affine rank minimization is NP-hard, we show that this problem can be relaxed to a convex formulation that can be solved using a tractable semidefinite program (SDP). In order to enable scalability of our proposed SDP to very large-scale problems, we adopt a two-stage approach. First, in order to reduce compute time, we develop a novel algorithm based on the Alternating Direction Method of Multipliers (ADMM) that we call Nuclear norm minimization with Max-norm constraints (NuMax) to solve the SDP. Second, we develop a greedy, approximate version of NuMax based on the column generation method commonly used to solve large-scale linear programs. We demonstrate that our framework is useful for a number of applications in machine learning and signal processing via a range of experiments on large-scale synthetic and real datasets.

The above example illustrates the superior performance of NuMax on the MNIST dataset of handwritten digits (see (a)). We construct a training dataset S(X) comprising S = 3000 secants and estimate the variation of the isometry constant δ that measures the distortion of the embedding as a function of the number of measurements M. The results of this experiment are plotted in (b) in comparison to principal components analysis (PCA) and random projections. For a distortion parameter δ = 0.2, NuMax produces an embedding with 8x fewer measurements than PCA. In essence, NuMax provides the best possible rate-distortion curve in terms of compressing the given image database.

A computationally efficient NuMax toolbox is available here.
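The convex relaxation described in the abstract can be prototyped directly with an off-the-shelf solver. The sketch below, in Python with CVXPY, is a naive small-scale baseline under an assumed constraint form (|s^T P s - 1| <= δ for each normalized secant s); it is not the toolbox's NuMax/ADMM solver:

```python
import numpy as np
import cvxpy as cp

def near_isometric_embedding(X, delta=0.2):
    # Normalized secant set S(X): pairwise differences on the unit sphere
    n, d = X.shape
    secants = [X[i] - X[j] for i in range(n) for j in range(i + 1, n)]
    S = [s / np.linalg.norm(s) for s in secants]

    # Convex relaxation: minimize the rank surrogate (nuclear norm, which
    # equals the trace on the PSD cone) subject to near-isometry on each secant
    P = cp.Variable((d, d), PSD=True)
    constraints = [cp.abs(s @ P @ s - 1) <= delta for s in S]
    cp.Problem(cp.Minimize(cp.trace(P)), constraints).solve()

    # Factor P = Psi^T Psi; the rows of Psi define the linear embedding
    w, V = np.linalg.eigh(P.value)
    keep = w > 1e-6
    return np.sqrt(w[keep])[:, None] * V[:, keep].T

# Toy usage: embed 20 random 10-dimensional points
Psi = near_isometric_embedding(np.random.randn(20, 10))
print(Psi.shape)  # (M, 10): M rows = number of measurements in the embedding
```

This brute-force SDP scales only to small secant sets; the ADMM and column-generation machinery in the paper exists precisely to get past that limit.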

Rice University announced the creation of the Rice Center for Digital Learning and Scholarship (RDLS) to bring an array of online education initiatives under one banner. RDLS will comprise initiatives including Connexions, OpenStax College, and OpenStax Tutor.

To learn more, see the press release and center website.