Seminars

Online Learning with Matrix Exponentiated Gradient Updates

Hsin-Hsiung Huang

2008-10-03
13:30 - 15:00

Room 404, Freshman Classroom Building

Tsuda, Rätsch, and Warmuth (2006) address the problem of learning a symmetric positive definite matrix. They derive kernelized updates based on matrix logarithms and matrix exponentials; these updates preserve both symmetry and positive definiteness. Separately, Vishwanathan, Schraudolph, and Smola (2006) propose an online support vector machine (SVM) that uses the stochastic meta-descent (SMD) algorithm to adapt its step size automatically. Building on their method, we derive updates that perform step-size adaptation for kernel principal component analysis (PCA). The resulting online kernel PCA extends the online SVM framework to loss functions whose gradient trace parameter is no longer a coefficient vector but an element of the reproducing kernel Hilbert space (RKHS).
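To make the matrix exponentiated gradient idea concrete, here is a minimal Python sketch of one such update, assuming a trace-one symmetric positive definite parameter W and a loss gradient supplied by the caller. The function names and the trace normalization are illustrative assumptions for this sketch, not the authors' exact kernelized update.

```python
import numpy as np

def sym_logm(W):
    """Matrix logarithm of a symmetric positive definite matrix via eigendecomposition."""
    lam, U = np.linalg.eigh(W)
    return (U * np.log(lam)) @ U.T

def sym_expm(S):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    lam, U = np.linalg.eigh(S)
    return (U * np.exp(lam)) @ U.T

def meg_update(W, grad, eta=0.1):
    """One matrix exponentiated gradient step (illustrative sketch).

    W is symmetric positive definite with unit trace; grad is the loss
    gradient at W. Stepping in the matrix-log domain and mapping back
    with the matrix exponential keeps the iterate symmetric positive
    definite; renormalizing the trace keeps it trace-one.
    """
    sym_grad = (grad + grad.T) / 2.0          # symmetrize the gradient
    W_new = sym_expm(sym_logm(W) - eta * sym_grad)
    return W_new / np.trace(W_new)
```

Because the exponential of a symmetric matrix is always symmetric positive definite, no explicit projection back onto the positive definite cone is needed after a step.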
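Likewise, a toy sketch of SMD-style step-size adaptation on a streaming least-squares problem, assuming the standard SMD recursions (per-coordinate gains scaled by max(1/2, 1 − μ g·v), and a gradient trace v updated with a Hessian-vector product). The setup and all names here are hypothetical; the seminar's method applies this adaptation in the RKHS setting rather than to a parameter vector.

```python
import numpy as np

def smd_least_squares(A, b, w, eta0=0.1, mu=0.05, lam=0.99, steps=1000, seed=0):
    """Stochastic meta-descent sketch on a stream of regression examples.

    Per example (a, y) the loss is 0.5 * (a.w - y)^2, so the gradient is
    (a.w - y) * a and the Hessian-vector product is (a.v) * a.
    """
    rng = np.random.default_rng(seed)
    eta = eta0 * np.ones_like(w)   # per-coordinate step sizes
    v = np.zeros_like(w)           # gradient trace: sensitivity of w to log(eta)
    for _ in range(steps):
        i = rng.integers(len(b))
        a, y = A[i], b[i]
        g = (a @ w - y) * a        # stochastic gradient
        Hv = (a @ v) * a           # Hessian-vector product for this example
        # Meta-level step: adapt step sizes by how well v predicts g.
        eta = eta * np.maximum(0.5, 1.0 - mu * g * v)
        # Base-level step: gradient descent with the adapted step sizes.
        w = w - eta * g
        # Update the gradient trace, decayed by lam.
        v = lam * v - eta * (g + lam * Hv)
    return w, eta
```

The gain update grows a coordinate's step size when successive gradients point the same way and shrinks it when they oscillate, which is what lets SMD tune step sizes automatically during the online run.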