Special Talk

Powering the Future of Imaging and Signal Processing with Data-Driven Systems

Speaker

Saiprasad Ravishankar (EECS, University of Michigan)

Abstract

The data-driven learning of signal models including dictionaries, sparsifying transforms, low-rank models, tensor and manifold models, etc., is of great interest in many applications. In this talk, I will present my recent research, which has developed highly efficient, scalable, and effective data-driven models and methodologies for signal processing, imaging, and other areas. First, I will discuss my work in the recently developed field of transform learning. Various structures for sparsifying transforms, such as well-conditioning, double sparsity, union-of-transforms, incoherence, and rotation invariance, will be considered; these structures enable efficient and effective learning and usage. I will also discuss online transform learning, which is particularly useful in applications involving large-scale or streaming data. I will demonstrate the high-quality results achieved by transform learning approaches in various applications, including image and video denoising, and X-ray computed tomography (CT) and magnetic resonance imaging (MRI) reconstruction from limited data. The convergence properties of the transform learning and transform learning-driven image reconstruction methods will be presented. In the context of MRI, I will also showcase my work on data-driven learning of undersampling patterns in compressed sensing-type setups. Second, I will present my recent work on efficient methods for synthesis dictionary learning, including in combination with low-rank models. Newly proposed algorithms such as SOUP, DINO-KAT, and LASSI provide state-of-the-art results in applications such as dynamic MRI. The efficiency and effectiveness of the methods proposed in my research may benefit a wide range of additional applications in imaging, computer vision, neuroscience, and other areas requiring data-driven parsimonious models. Finally, I will provide a brief overview of ongoing work and future pathways for my research.
This will include topics such as i) light field reconstruction from focal stacks, ii) online data-driven estimation of dynamic data from streaming, limited measurements, iii) physics-driven deep training of image reconstruction algorithms, iv) theory of blind compressed sensing, etc.
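For readers unfamiliar with the transform model mentioned in the abstract: a sparsifying transform W maps a signal x to an approximately sparse vector, Wx ≈ z. One appeal of this model, and one source of the efficiency the abstract refers to, is that for a fixed W the best s-sparse code is available in closed form by hard-thresholding Wx (keeping its s largest-magnitude entries). Below is a minimal NumPy sketch of that sparse-coding step; it is illustrative only, and the function name is my own, not the speaker's code:

```python
import numpy as np

def transform_sparse_code(W, x, s):
    """For a fixed transform W, the sparse code minimizing
    ||W x - z||_2 subject to ||z||_0 <= s is obtained exactly by
    keeping the s largest-magnitude entries of W x and zeroing the rest."""
    z = W @ x
    keep = np.argsort(np.abs(z))[-s:]  # indices of the s largest-magnitude entries
    out = np.zeros_like(z)
    out[keep] = z[keep]
    return out

# Example with a well-conditioned (here orthonormal) transform:
rng = np.random.default_rng(0)
W, _ = np.linalg.qr(rng.standard_normal((8, 8)))  # orthonormal => condition number 1
x = rng.standard_normal(8)
z = transform_sparse_code(W, x, s=3)
assert np.count_nonzero(z) <= 3
```

This exact, inexpensive sparse-coding update is what distinguishes the transform model from synthesis dictionary sparse coding, where the analogous problem is NP-hard in general.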

Where

CGU Math South Building

Misc. Information

Parking Functions and Friends

Speaker

Matthias Beck, San Francisco State University

Abstract

Imagine a one-way cul-de-sac with four parking spots. Initially they are all free, but there are four cars approaching the street, and they would all like to park. To make life interesting, every car has a parking preference, and we record the preferences in a sequence of four numbers; e.g., the sequence (2, 1, 1, 3) means that the first car would like to park at spot number 2, the second and third drivers prefer parking spot number 1, and the last car would like to park at spot number 3. The street is very narrow, so there is no way to back up. Now each car enters the street and approaches its preferred parking spot; if it is free, it parks there, and if not, it moves down the street to the first available spot. We call a sequence a parking function if all cars end up finding a parking spot. For example, our sequence (2, 1, 1, 3) is a parking function, whereas (1, 3, 3, 4) is not.
Naturally, we could ask about parking functions for any number of parking spots; we call this number the length of the parking function. A moment's thought reveals that there is one parking function of length 1, three parking functions of length 2, and sixteen parking functions of length 3. A beautiful theorem due to Konheim and Weiss says that there is a pattern to be found here: there are precisely (n+1)^{n-1} parking functions of length n. We will hint at a proof of this theorem and illustrate how it allows us to connect parking functions to seemingly unrelated objects, which happen to exhibit the same counting pattern: a certain set of hyperplanes in n-dimensional space first studied by Shi, and a certain family of mixed graphs, which we introduced in recent joint work with Ana Berrizbeitia, Michael Dairyko, Claudia Rodriguez, Amanda Ruiz, and Schuyler Veeneman.
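The parking rule described above is easy to simulate, and a brute-force count over all preference sequences lets one check the Konheim–Weiss formula (n+1)^{n-1} for small n. A short Python sketch (names are my own choices for illustration):

```python
from itertools import product

def is_parking_function(prefs):
    """Simulate cars on a one-way street with len(prefs) spots.
    prefs[i] is the 1-indexed preferred spot of car i."""
    n = len(prefs)
    occupied = [False] * n
    for p in prefs:
        spot = p - 1
        # Roll forward to the first free spot at or after the preference.
        while spot < n and occupied[spot]:
            spot += 1
        if spot == n:  # drove off the end of the street: parking failed
            return False
        occupied[spot] = True
    return True

def count_parking_functions(n):
    """Brute-force count over all n^n preference sequences of length n."""
    return sum(is_parking_function(p)
               for p in product(range(1, n + 1), repeat=n))

print(is_parking_function((2, 1, 1, 3)))   # True, as in the abstract
print(is_parking_function((1, 3, 3, 4)))   # False: the last car drives off the end
print(count_parking_functions(3))          # 16 = (3+1)^(3-1)
```

The brute force is only feasible for small n (it examines n^n sequences), but it confirms the counts 1, 3, 16 mentioned above.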

Where

Davidson Lecture Hall, CMC

Misc. Information


Claremont Graduate University | Claremont McKenna | Harvey Mudd | Pitzer | Pomona | Scripps
Proudly Serving the Math Community at the Claremont Colleges Since 2007
Copyright © 2018 Claremont Center for the Mathematical Sciences
