In the 2025 U.S. News & World Report “Best Colleges” rankings, Rice University's Electrical Engineering program climbed 12 spots to 16th nationwide.
Three Papers at EMNLP 2024
Three DSP group papers have been accepted to the Findings of the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP 2024) in Miami, Florida:
- "MalAlgoQA: A Pedagogical Approach for Evaluating Counterfactual Reasoning Abilities" by Naiming Liu, Shashank Sonkar, MyCo Le, and Richard G. Baraniuk
- "Pedagogical Alignment of Large Language Models" by Shashank Sonkar, Kangqi Ni, Sapana Chaudhary, and Richard G. Baraniuk
- "The Student Data Paradox: Examining the Regressive Side Effects of Training LLMs for Personalized Learning" by Shashank Sonkar, Naiming Liu, and Richard G. Baraniuk
Self-Consuming AI Resources
To help organize the growing literature on AI self-consuming feedback loops, we have launched a "Self-Consuming AI Resources" archive at dsp.rice.edu/ai-loops.
In the 2000s, the Rice DSP group managed a similar archive for the field of compressive sensing; it grew to several thousand papers used by a large community of researchers. We hope this archive will prove similarly useful.
We are currently refining the materials on the page and would greatly appreciate recommendations of missing or new literature. Much of the media coverage is also still missing, and we are working to gather it.
Email us at selfconsumingAI@gmail.com to add your latest work or that of others in this fast-moving area!
Self-Improving Diffusion Models with Synthetic Data
Sina Alemohammad, Ahmed Imtiaz Humayun, Richard Baraniuk
Rice University
Shruti Agarwal, John Collomosse
Adobe Research
arxiv.org/abs/2408.16333, 30 August 2024
Abstract: The artificial intelligence (AI) world is running out of real data for training increasingly large generative models, resulting in accelerating pressure to train on synthetic data.
Unfortunately, training new generative models on synthetic data from current or past generation models creates an autophagous (self-consuming) loop that degrades the quality and/or diversity of the synthetic data, a phenomenon that has been termed model autophagy disorder (MAD) and model collapse. Current thinking around model autophagy recommends avoiding synthetic data for model training lest the system deteriorate into MADness.
In this paper, we take a different tack and treat synthetic data differently from real data.
Self-IMproving diffusion models with Synthetic data (SIMS) is a new training concept for diffusion models that uses self-synthesized data to provide negative guidance during generation, steering the model away from the non-ideal synthetic data manifold and toward the real data distribution. We demonstrate that SIMS is capable of self-improvement: it sets new records on the Fréchet inception distance (FID) metric for CIFAR-10 and ImageNet-64 generation and achieves competitive results on FFHQ-64 and ImageNet-512. Moreover, SIMS is, to the best of our knowledge, the first prophylactic generative AI algorithm that can be iteratively trained on self-generated synthetic data without going MAD. As a bonus, SIMS can adjust a diffusion model's synthetic data distribution to match any desired in-domain target distribution, helping mitigate bias and ensure fairness.
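The negative-guidance idea in the abstract can be sketched as a one-line combination of two noise predictions. This is an illustrative sketch, not the paper's implementation: the function name is hypothetical, and in practice the inputs would be the full noise-prediction tensors produced at each sampler step by the base model and by an auxiliary model fine-tuned on the base model's own synthetic data.

```python
def sims_negative_guidance(eps_base, eps_synth, w):
    """Steer a diffusion sampling step away from the synthetic-data manifold.

    eps_base:  noise prediction from the base diffusion model
    eps_synth: noise prediction from an auxiliary model fine-tuned on
               self-generated synthetic data (hypothetical setup)
    w:         guidance weight; w = 0 recovers the base model
    """
    # Extrapolate from the synthetic-data model's prediction through the
    # base model's prediction, pushing generation toward the real data.
    return eps_base + w * (eps_base - eps_synth)
```

With w = 0 the guided prediction equals the base prediction; increasing w pushes samples further from the synthetic-data distribution, analogous to classifier-free guidance with the self-trained model playing the "negative" role.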
The figure above illustrates that SIMS simultaneously improves diffusion modeling and synthesis performance while acting as a prophylactic against Model Autophagy Disorder (MAD). First row: Samples from a base diffusion model (EDM2-S) trained on 1.28M real images from the ImageNet-512 dataset (Fréchet inception distance, FID = 2.56). Second row: Samples from the base model after fine-tuning with 1.5M images synthesized from the base model, which degrades synthesis performance and pushes the model towards MADness (model collapse) (FID = 6.07). Third row: Samples from the base model after applying SIMS using the same self-generated synthetic data as in the second row (FID = 1.73).
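For reference, the FID scores quoted above measure the distance between Gaussian fits to Inception-network features of real and generated images (lower is better):

```latex
\mathrm{FID} = \left\lVert \mu_r - \mu_g \right\rVert_2^2
  + \operatorname{Tr}\!\left( \Sigma_r + \Sigma_g - 2 \left( \Sigma_r \Sigma_g \right)^{1/2} \right)
```

where (μ_r, Σ_r) and (μ_g, Σ_g) are the mean and covariance of the Inception features of the real and generated images, respectively.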
Study Smarter with Google Gemini and OpenStax
Google's integration of its Gemini AI tool suite with OpenStax launched today.
"Starting today, Gemini can pull information from academic textbooks with OpenStax, an educational nonprofit initiative of Rice University. Let's say you're taking an economics class and need help with new concepts — just ask questions like '@OpenStax explain the concept of supply and demand.' In seconds, you'll get a clear, concise explanation complete with links to relevant textbook content."
When AI’s Output Is a Threat to AI Itself
The New York Times reported on some of our recent work on the dangers of self-consuming generative models:
- New York Times, "When A.I.’s Output Is a Threat to A.I. Itself," 26 August 2024
"As A.I.-generated data becomes harder to detect, it’s increasingly likely to be ingested by future A.I., leading to worse results."
DSP PhD Alum Randall Balestriero Accepts Faculty Position at Brown
Rice DSP alum Randall Balestriero (PhD, 2021) has accepted an assistant professor position in the Department of Computer Science at Brown University. Since graduating, he has served as a postdoc with Yann LeCun at Meta/FAIR and at GQS, Citadel.
DSP PhD Alum Lorenzo Luzi Accepts Faculty Position at Rice Data2Knowledge (D2K) Lab
Rice DSP alum and valedictorian Lorenzo Luzi (PhD, 2024) has accepted an assistant teaching professor position in the Data to Knowledge (D2K) Lab and the Department of Statistics at Rice University.
Two Papers at ICML 2024
Two DSP group papers have been accepted by the International Conference on Machine Learning (ICML) 2024 in Vienna, Austria:
- "PIDformer: Transformer Meets Control Theory" by Tam Nguyen, César A. Uribe, Tan M. Nguyen, and Richard Baraniuk
- "Deep Networks Always Grok and Here is Why" by Ahmed Imtiaz Humayun, Randall Balestriero, and Richard Baraniuk
NSF invests $90M in innovative national scientific cyberinfrastructure for transforming STEM education
The U.S. National Science Foundation announced today a strategic investment of $90 million over five years in SafeInsights, a unique national scientific cyberinfrastructure aimed at transforming learning research and STEM education. Funded through the Mid-Scale Research Infrastructure Level-2 program (Mid-scale RI-2), SafeInsights is led by Prof. Richard Baraniuk at OpenStax at Rice University, who will oversee the implementation and launch of this new research infrastructure project of unprecedented scale and scope.
SafeInsights aims to serve as a central hub, facilitating research coordination and leveraging data across a range of major digital learning platforms that currently serve tens of millions of U.S. learners across education levels and across science, technology, engineering and mathematics (STEM) disciplines.
With its controlled and intuitive framework, unique privacy-protecting approach and emphasis on the inclusion of students, educators and researchers from diverse backgrounds, SafeInsights will enable extensive, long-term research on the predictors of effective learning, which are key to academic success and persistence.
Links for more information: