Seminars

Deriving Structure of Images via Dictionary and Bayesian Network Learning

Wen-Liang Hwang

2012-10-26
12:30:00 - 14:30:00

Room 103, Mathematics Research Center Building (originally the New Math. Bldg.)

The most important descriptor of an image is its structure. Image processing researchers have developed several methods to derive the low-dimensional structure of images; examples include using the Fourier transform to represent oscillatory components, a wavelet transform to represent piecewise smooth images, and a pre-defined dictionary for sparse representation. Such approaches have achieved a certain degree of success in deriving image structures and solving low-level problems such as compression and restoration; however, there is now a growing trend towards data-driven approaches that exploit data-adaptive algorithms to retrieve image structures. In this talk, I will present two data-adaptive methods: one learns an image’s structure via dictionary adaptation, and the other learns a Bayesian network from transform coefficients. I will show that the state-of-the-art K-SVD dictionary learning algorithm can be improved by using the proximal-point method, and, by exploiting the structure of wavelet coefficients, that the Bayesian network approach outperforms the state-of-the-art BM3D denoising algorithm, particularly on texture images.
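To make the idea of sparse representation over a dictionary concrete, here is a minimal sketch (not the speaker's method) of sparse coding with orthogonal matching pursuit: a signal is approximated as a combination of a few atoms, i.e., columns of a dictionary `D`. The dictionary here is random Gaussian for illustration; dictionary-learning algorithms such as K-SVD instead adapt the atoms to the data.

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: approximate x as a
    k-sparse combination of the dictionary atoms (columns of D)."""
    residual = x.copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit the coefficients on the chosen support by least squares
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coeffs[support] = sol
    return coeffs

rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)      # unit-norm atoms, as in K-SVD
true = np.zeros(32)
true[[3, 17]] = [1.5, -2.0]         # a 2-sparse ground-truth code
x = D @ true                        # synthetic signal
a = omp(D, x, k=2)                  # recover a 2-sparse code for x
print("sparsity:", np.count_nonzero(a),
      "residual:", np.linalg.norm(x - D @ a))
```

A dictionary-learning method would alternate this sparse-coding step with an update of `D` itself, which is where K-SVD (and the proximal-point refinement mentioned in the talk) comes in.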