M. S. Asif, A. Ayremlou, A. Sankaranarayanan, A. Veeraraghavan, and R. G. Baraniuk, “FlatCam: Thin, Bare-Sensor Cameras using Coded Aperture and Computation,” arXiv preprint arXiv:1509.00116, 2015.
FlatCam is a thin form-factor lensless camera that consists of a coded mask placed on top of a bare, conventional sensor array. Unlike a traditional, lens-based camera, where an image of the scene is directly recorded on the sensor pixels, each pixel in FlatCam records a linear combination of light from multiple scene elements. A computational algorithm is then used to demultiplex the recorded measurements and reconstruct an image of the scene. FlatCam is an instance of a coded aperture imaging system; however, unlike the vast majority of related work, we place the coded mask extremely close to the image sensor, which enables a thin system. We employ a separable mask to ensure that both calibration and image reconstruction are scalable in terms of memory requirements and computational complexity. We demonstrate the potential of the FlatCam design using two prototypes: one at visible wavelengths and one at infrared wavelengths.
FlatCam architecture. (a) Every light source within the camera field-of-view contributes to every pixel in the multiplexed image formed on the sensor. A computational algorithm reconstructs the image of the scene. Inset shows the mask-sensor assembly of our prototype in which a binary, coded mask is placed 0.5mm away from an off-the-shelf digital image sensor. (b) An example of sensor measurements and the image reconstructed by solving a computational inverse problem.
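As a rough sketch of the separable-mask idea (not the authors' code, and with toy dimensions, random mask factors, and a regularization level chosen purely for illustration): a separable mask means the sensor image can be modeled as Y = Phi_L X Phi_R^T + noise, so calibration stores two small matrices rather than one enormous one, and reconstruction reduces to a pair of small regularized least-squares problems.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: n x n scene, m x m sensor.
n, m = 32, 48

# Hypothetical separable mask factors (binary, as in a coded mask).
# With a separable mask, the sensor image is Y = phi_L @ X @ phi_R.T + noise,
# so calibration needs only two (m x n) matrices, not one (m*m x n*n) matrix.
phi_L = rng.choice([0.0, 1.0], size=(m, n))
phi_R = rng.choice([0.0, 1.0], size=(m, n))

X = rng.random((n, n))                                    # unknown scene
Y = phi_L @ X @ phi_R.T + 0.01 * rng.standard_normal((m, m))

def reg_pinv(phi, lam=1e-2):
    """Tikhonov-regularized pseudoinverse via the SVD of one mask factor."""
    U, s, Vt = np.linalg.svd(phi, full_matrices=False)
    return Vt.T @ np.diag(s / (s**2 + lam)) @ U.T

# Demultiplex: apply the two small regularized inverses on each side.
X_hat = reg_pinv(phi_L) @ Y @ reg_pinv(phi_R).T

rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(rel_err)  # small relative reconstruction error
```

Because the model factors into left and right mask matrices, both memory (two m-by-n matrices) and computation (two small SVDs) scale far better than a general, unstructured multiplexing model would.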
Free textbooks from Rice University-based publisher OpenStax are now in use at one-in-five degree-granting U.S. colleges and universities and have already saved college students $39 million in the 2015-16 academic year.
Richard Baraniuk was one of 3126 researchers in the sciences and social sciences who authored papers that ranked among the top 1% most cited for their subject field and year of publication in Thomson Reuters’ academic citation indexing and search service, Web of Knowledge.
In addition, he was selected as one of The World’s Most Influential Scientific Minds 2015.
- Deep learning theory, with applications to object recognition, signal processing, and neuroscience
- Machine learning for personalized education to improve learning outcomes in conjunction with the OpenStax Tutor team
- Computational imaging for flat, lensless cameras, looking around corners, light field cameras, time of flight cameras, phase retrieval, and imaging through scattering media
Rice DSP postdoc alums have gone on to academic positions at Cornell, Columbia, CMU, Georgia Tech, U. Maryland, U. Wisconsin, U. Minnesota, NCSU, McGill, EPFL, and KU-Leuven. Email <richb at rice dot edu> for more information.
M. A. Davenport, P. T. Boufounos, M. B. Wakin, and R. G. Baraniuk, “Signal Processing With Compressive Measurements,” IEEE Journal of Selected Topics in Signal Processing, Vol. 4, No. 2, April 2010
Abstract: The recently introduced theory of compressive sensing enables the recovery of sparse or compressible signals from a small set of nonadaptive, linear measurements. If properly chosen, the number of measurements can be much smaller than the number of Nyquist-rate samples. Interestingly, it has been shown that random projections are a near-optimal measurement scheme. This has inspired the design of hardware systems that directly implement random measurement protocols. However, despite the intense focus of the community on signal recovery, many (if not most) signal processing problems do not require full signal recovery. In this paper, we take some first steps in the direction of solving inference problems—such as detection, classification, or estimation—and filtering problems using only compressive measurements and without ever reconstructing the signals involved. We provide theoretical bounds along with experimental results.
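A minimal illustration of the idea of inference without reconstruction (a toy example, not the paper's experiments; the dimensions, noise level, and templates are illustrative assumptions): because random projections approximately preserve inner products, a matched-filter classifier can be applied directly to the compressive measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

N, M = 1000, 50                                   # ambient dim, # measurements
Phi = rng.standard_normal((M, N)) / np.sqrt(M)    # random measurement matrix

# Two known, unit-norm candidate signals (templates).
s0 = rng.standard_normal(N); s0 /= np.linalg.norm(s0)
s1 = rng.standard_normal(N); s1 /= np.linalg.norm(s1)

# The sensor records y = Phi @ x (here x = s1) plus a little noise.
y = Phi @ s1 + 0.05 * rng.standard_normal(M)

# Classify directly in the compressed domain: correlate y against the
# compressed templates. No reconstruction of x is ever performed.
scores = [float(y @ (Phi @ s)) for s in (s0, s1)]
decision = int(np.argmax(scores))
print(decision)  # 1
```

The cross-correlation with the wrong template concentrates near zero (with deviations on the order of 1/sqrt(M)), while the correct template scores near one, so detection succeeds with far fewer than N measurements.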
OpenStax College today unveiled three new textbooks: Algebra and Trigonometry, College Algebra, and Chemistry. Our growing catalog of free textbooks (15 titles to date) will save 260,000 students at nearly 2000 institutions an estimated $25 million this academic year alone.
Our growth curve has quieted most of those who doubted the sustainability of open education. Today, six times more students are using our books than two years ago, and we are well ahead of our goal to eventually save students $120 million per year.
Thanks to the William and Flora Hewlett Foundation, the Laura and John Arnold Foundation, the Bill & Melinda Gates Foundation, the 20 Million Minds Foundation, the Maxfield Foundation, the Calvin K. Kazanjian Foundation, the Bill and Stephanie Sick Fund, and the Leon Lowenstein Foundation for all their support of OpenStax!
In the news:
- “Are Savvy Students Sabotaging Big Textbook?” Bloomberg News
- “Is This the Solution to Crazy High Textbook Prices?” TIME
- “Open Texts Predicted to Save Students $25 Million,” eCampus News
- “How College Students Can Save Money on Pricey Textbooks,” Washington Post
- “Textbooks Are Going Digital, But Will That Put College Bookstores Out Of Business?” Forbes
- “OpenStax Releases 3 New Free College Textbooks,” Campus Technology
- “These 10 Trends are Shaping the Future of Education,” EducationDIVE
- “Triaging Textbook Costs,” Inside Higher Ed
- Interview on KOMO-AM, Seattle
Abstract: A grand challenge in machine learning is the development of computational algorithms that match or outperform humans in perceptual inference tasks such as visual object and speech recognition. The key factor complicating such tasks is the presence of numerous nuisance variables, for instance, the unknown object position, orientation, and scale in object recognition or the unknown voice pronunciation, pitch, and speed in speech recognition. Recently, a new breed of deep learning algorithms has emerged for high-nuisance inference tasks; they are constructed from many layers of alternating linear and nonlinear processing units and are trained using large-scale algorithms and massive amounts of training data. The recent success of deep learning systems is impressive — they now routinely yield pattern recognition systems with near- or super-human capabilities — but a fundamental question remains: Why do they work? Intuitions abound, but a coherent framework for understanding, analyzing, and synthesizing deep learning architectures has remained elusive.
We answer this question by developing a new probabilistic framework for deep learning based on a Bayesian generative probabilistic model that explicitly captures variation due to nuisance variables. The graphical structure of the model enables it to be learned from data using classical expectation-maximization techniques. Furthermore, by relaxing the generative model to a discriminative one, we can recover two of the current leading deep learning systems, deep convolutional neural networks (DCNs) and random decision forests (RDFs), providing insights into their successes and shortcomings as well as a principled route to their improvement.
The figure below illustrates an example of a mapping from our Deep Rendering Model (DRM) to its factor graph to a Deep Convolutional Network (DCN) at one level of abstraction. The factor graph representation of the DRM supports efficient inference algorithms such as max-sum message passing. The computations that implement max-sum message passing match those of a DCN.
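To make the correspondence concrete, here is a toy sketch (illustrative only, with random 1-D data and filters, not the paper's model): one layer of max-sum inference over nuisance variables reduces to the familiar DCN triple of template matching (convolution/correlation), a max over an on/off switching variable (ReLU), and a max over local translations (max pooling).

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D "image" and a bank of local templates (filters).
x = rng.standard_normal(16)
templates = rng.standard_normal((4, 3))   # 4 templates of width 3

def correlate(x, w):
    """Valid-mode correlation: template matching at every position."""
    return np.array([x[i:i + w.size] @ w for i in range(x.size - w.size + 1)])

# Max-sum inference, step by step, in DCN vocabulary:
conv = np.stack([correlate(x, w) for w in templates])   # (4, 14) feature maps
relu = np.maximum(conv, 0.0)                            # max over "off" state
pooled = relu.reshape(4, 7, 2).max(axis=2)              # max over translations

print(pooled.shape)  # (4, 7)
```

Each nonlinearity in the network thus corresponds to maximizing out one latent nuisance variable in the generative model, which is the sense in which DCN forward propagation "matches" max-sum message passing.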
College students could save an average of $128 a course if traditional textbooks were replaced with free or low-cost “open-source” electronic versions, a new report finds.
Textbook costs are particularly burdensome for students at two-year community colleges; the cost, more than $1,300, is about 40 percent of the average cost of tuition, according to the College Board.
Jennifer Swain, 21, a student at South Florida State College, said her instructor for a physics class used an open-source textbook (College Physics, from OpenStax). She likes that she can download it onto an app on her iPad that allows her to highlight sections of text, just as she could in a traditional textbook — but this one is free, whereas a comparable hard copy physics text would cost about $250. A classmate, Ashley Edmonson, 24, said it’s convenient to access the textbook from any device, so she doesn’t have to lug around another tome: “They’re really hard to carry,” she said.
The Rice/OpenStax Workshop on Personalized Learning will be held Wednesday, 1 April 2015 on the Rice University campus in Houston, Texas.
The previous two workshops, in 2013 and 2014, focused on Scaling Up Success in computer-based learning and Bridging the Laboratory-Classroom Divide in cognitive science, and featured Steve Ritter, David Kuntz, Mark McDaniel, Jeff Karpicke, Kurt Van Lehn, Michael Mozer, David Pritchard, Neil Heffernan, Zach Pardos, and Winslow Burleson.
This is the third incarnation of the workshop. We plan to focus on Modeling and Correcting Student Understanding through two related themes: (1) assessing what students know and forming an accurate picture of student understanding, and (2) determining the causes of student misunderstanding as well as methods for remediation. The workshop will be attended by leading experts in educational psychology, computer science, educational data mining, and cognitive science.