Final program

All talks will be held in Science Hall 107 unless otherwise noted.

Monday, May 22nd 2017
  • 9am - Conference opening
  • 9:30am-10:30am Anna Gilbert, Lecture 1
  • 10:30am-11am Coffee Break
  • 11am-Noon, Deanna Needell, Lattices from equiangular tight frames with applications to lattice sparse recovery
  • 2pm-3pm Anna Gilbert, Lecture 2
  • 3pm Coffee Break
  • 5:30pm-7:00pm Reception at The Game, 2605 S. Espina
Tuesday, May 23rd 2017
  • 9:30am-10:30am Anna Gilbert, Lecture 3
  • 10:30am-11am Coffee Break
  • 11am-Noon, Mark Davenport, Sparse approximation of continuous-time signals
  • 2pm-3pm Anna Gilbert, Lecture 4
  • 3pm Coffee Break
Wednesday, May 24th 2017
  • 9:30am-10:30am Anna Gilbert, Lecture 5
  • 10:30am-11am Coffee Break
  • 11am-Noon, Rachel Ward, Low-rank matrix completion from few samples: recent developments
  • 2pm-3pm Anna Gilbert, Lecture 6
  • 3pm Coffee Break
Thursday, May 25th 2017
  • 9:30am-10:30am Anna Gilbert, Lecture 7
  • 10:30am-11am Group Photo and Coffee Break
  • 11am-Noon Anna Gilbert, Lecture 8
Friday, May 26th 2017
  • 9:30am-10:30am Anna Gilbert, Lecture 9
  • 10:30am-11am Coffee Break
  • 11am-Noon Anna Gilbert, Lecture 10
  • The End

Abstracts of invited talks

  • Deanna Needell, Claremont McKenna College,
    Title: Lattices from equiangular tight frames with applications to lattice sparse recovery 
    Abstract: It is now well known that one can efficiently recover a sparse signal from a small number of linear measurements. Typically, however, one not only knows that the signal has a sparse representation in some basis; it may also possess some other type of structure. Model-based sparse recovery addresses additional structures such as block sparsity and spread sparsity, as well as settings in which the signal is known to reside in some other convex space. However, when the signal is also known to lie in some lattice, very little is known about how this additional structure can be successfully utilized. Motivated by this problem, we introduce some theory from algebra and provide results describing when the set of integral linear combinations of atoms from an equiangular tight frame forms a lattice. Such an understanding should lead to designs of sampling operators for sparse lattice signals. The talk concludes with open questions and future directions.
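
    A toy numerical illustration of the objects in this abstract (a sketch of our own, not material from the talk): the three-vector "Mercedes-Benz" frame in R^2 is the smallest real equiangular tight frame, and since its atoms sum to zero, their integral combinations form a hexagonal lattice in the plane.

    ```python
    import numpy as np

    # Illustrative example: the "Mercedes-Benz" frame, three unit vectors in R^2
    # spaced 120 degrees apart.
    angles = np.pi / 2 + 2 * np.pi * np.arange(3) / 3
    F = np.stack([np.cos(angles), np.sin(angles)])   # shape (2, 3); columns are the atoms

    # Tight frame: F F^T = (N/d) I, here (3/2) I for N = 3 atoms in d = 2 dimensions.
    print(np.round(F @ F.T, 8))

    # Equiangular: every off-diagonal Gram entry has the same magnitude, here 1/2.
    G = F.T @ F
    print(np.round(np.abs(G[~np.eye(3, dtype=bool)]), 8))

    # The atoms sum to zero, so integral combinations of all three atoms reduce to
    # integral combinations of the first two -- a (hexagonal) lattice in R^2.
    print(np.round(F.sum(axis=1), 8))
    ```

    Whether and how such lattice structure can be exploited when designing sampling operators is the kind of question the talk addresses.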

  • Mark Davenport, Georgia Institute of Technology,
    Title: Sparse approximation of continuous-time signals  
    Abstract: While compressive sensing is often motivated as an alternative to Nyquist-rate sampling, there remains a gap between the discrete, finite-dimensional compressive sensing framework and the problem of acquiring a continuous-time signal. In this talk I will discuss one approach to bridging this gap by exploiting the discrete prolate spheroidal sequences (DPSSs), a collection of functions that provide an efficient representation for discrete signals that are perfectly timelimited and nearly bandlimited. By modulating and merging the DPSS basis -- also known as the Slepian basis -- we obtain a dictionary that offers high-quality sparse approximations for most sampled multiband signals. This dictionary can be readily exploited by standard sparse recovery algorithms. Unfortunately, due to the high computational complexity of projecting onto the Slepian basis, this representation is often overlooked in favor of the fast Fourier transform (FFT). In this talk I will also describe novel fast algorithms for computing approximate projections onto the leading Slepian basis elements with a complexity comparable to that of the FFT.
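
    For a hedged sense of the approximation property mentioned in the abstract (the parameters N, W, and K below are our own illustrative choices, and this is not code from the talk): scipy exposes the DPSS sequences directly, and roughly the leading 2NW of them capture nearly all the energy of a sampled signal bandlimited to |f| <= W.

    ```python
    import numpy as np
    from scipy.signal.windows import dpss

    N = 512                  # number of samples (illustrative)
    W = 0.05                 # half-bandwidth in cycles/sample (illustrative)
    K = int(2 * N * W) + 4   # keep roughly 2NW leading Slepian vectors

    # Columns of S are the leading K DPSS (Slepian) basis vectors; with Kmax set,
    # scipy returns them unit-normalized, and they are mutually orthogonal.
    S = dpss(N, N * W, Kmax=K).T

    # A signal bandlimited to |f| <= W, sampled at t = 0, 1, ..., N-1.
    rng = np.random.default_rng(0)
    freqs = rng.uniform(-W, W, size=8)
    amps = rng.standard_normal(8)
    t = np.arange(N)
    x = (amps * np.cos(2 * np.pi * freqs * t[:, None])).sum(axis=1)

    # Orthogonal projection onto the span of the leading Slepian vectors.
    x_hat = S @ (S.T @ x)
    print("relative approximation error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
    ```

    The naive projection above costs O(NK) per signal once S has been computed; the fast algorithms described in the talk bring approximate projections down to a cost comparable to the FFT.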

  • Rachel Ward, The University of Texas at Austin,
    Title: Low-rank matrix completion from few samples: recent developments  
    Abstract: Consider the matrix completion problem: given only a subset of the entries of a matrix, recover the remaining ones. This is impossible unless we assume additional information about the matrix, such as that it has low rank. In this talk, we discuss recent results on the low-rank matrix completion problem: how many entries do we need to see to complete a matrix of rank r? How should such entries be sampled, given some structure of the low-rank matrix? Can it help to sample entries adaptively? We will see how these questions are related to matrix leverage scores and matrix large deviations inequalities.
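
    To make the sampling question concrete, here is a minimal sketch (the sizes, the 40% uniform sampling rate, and the simple alternating-projection solver are all our own illustrative choices, not methods from the talk) that recovers a rank-3 matrix from a random subset of its entries.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, r = 60, 3
    M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # ground truth, rank r

    mask = rng.random((n, n)) < 0.4     # observe ~40% of entries, uniformly at random
    X = np.zeros((n, n))
    for _ in range(300):
        # Project onto the set of rank-r matrices via a truncated SVD ...
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :r] * s[:r]) @ Vt[:r]
        # ... then re-impose agreement with the observed entries.
        X[mask] = M[mask]

    print("relative recovery error:", np.linalg.norm(X - M) / np.linalg.norm(M))
    ```

    For incoherent matrices, results of the kind surveyed in the talk show that on the order of n r polylog(n) uniformly sampled entries suffice, and that leverage-score-weighted or adaptive sampling can handle matrices for which uniform sampling fails.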

    (Last updated on May 25, 2017)