All lecture locations are listed on p. 4 of the first set of slides.

Course announcements will be posted on the **mailing list**.

This page contains slides and detailed notes for the kernel part of the course. The assignment may also be found here, at the bottom of the page. Note that the slides will be updated as the course progresses, since I modify them to answer questions raised in class. The date of last update appears next to each document - be sure to get the latest version. Let me know if you find errors.

There are sets of practice exercises and solutions further down the page (after the slides).

See David Silver's **page** for the reinforcement learning part of the course.

- Definition of a kernel, how it relates to a feature space
- Combining kernels to make new kernels
- The reproducing kernel Hilbert space
- Applications: difference in means, kernel PCA, kernel ridge regression

Lectures 4, 5, 6, and 7 **slides** and **notes**, last modified 23 Feb 2016
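As a rough illustration of one of the applications listed above, here is a minimal NumPy sketch of kernel ridge regression with a Gaussian kernel. It is my own example, not taken from the slides; the function names and the `lam * n` scaling of the regulariser are my choices.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Gram matrix of the Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * sigma**2))

def kernel_ridge_fit(X, y, lam=1e-4, sigma=1.0):
    # Regularised least squares in the RKHS: alpha = (K + lam * n * I)^{-1} y
    n = len(X)
    K = rbf_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def kernel_ridge_predict(X_train, alpha, X_test, sigma=1.0):
    # Prediction is a kernel expansion: f(x) = sum_i alpha_i k(x_i, x)
    return rbf_kernel(X_test, X_train, sigma) @ alpha
```

Fitting a smooth function such as a sine curve with this sketch shows the typical behaviour: larger `lam` smooths the fit, smaller `lam` interpolates the data more closely.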

- Distance between means in RKHS, integral probability metrics, the maximum mean discrepancy (MMD), two-sample tests
- Choice of kernels for distinguishing distributions, characteristic kernels
- Covariance operator in RKHS: proof of existence, definition of norms (including HSIC, the Hilbert-Schmidt independence criterion)
- Application of HSIC to independence testing
- Application of HSIC to feature selection and taxonomy discovery
- Introduction to independent component analysis, kernel ICA

Lecture 8 **slides** and **notes**, last modified 26 Jan 2016
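To make the MMD topic above concrete, the following is a hedged NumPy sketch of the standard unbiased estimator of MMD² between two samples, using a Gaussian kernel. The helper name and the fixed bandwidth are my own choices, not from the course materials.

```python
import numpy as np

def mmd2_unbiased(X, Y, sigma=1.0):
    # Unbiased estimate of MMD^2 between samples X ~ P and Y ~ Q
    # with the Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    def gram(A, B):
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-sq / (2 * sigma**2))

    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = gram(X, X), gram(Y, Y), gram(X, Y)
    # Drop the diagonal terms so the within-sample averages are unbiased
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * Kxy.mean()
```

On two samples from the same distribution the estimate hovers around zero (it can be slightly negative, being unbiased); on samples from well-separated distributions it is clearly positive, which is the basis of the two-sample test.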

- Introduction to convex optimization
- The representer theorem
- Large margin classification, support vector machines for classification

Lecture 9 **slides**, lecture 10 **slides**, and **notes**, last modified 20 Mar 2013
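The representer theorem listed above says the solution lives in the span of the functions k(·, xᵢ), so a learner only needs to track expansion coefficients. A minimal way to see this in code (using a kernel perceptron rather than a full SVM solver, which would be much longer; this example is mine, not from the lectures) is:

```python
import numpy as np

def kernel_perceptron(K, y, epochs=10):
    # K is the Gram matrix on the training set, y the labels in {-1, +1}.
    # By the representer theorem the learned function is
    #   f(x) = sum_i a_i y_i k(x_i, x),
    # so we only store the coefficients a_i (here, mistake counts).
    n = len(y)
    a = np.zeros(n)
    for _ in range(epochs):
        for i in range(n):
            # f(x_i) evaluated through the Gram matrix
            if y[i] * ((a * y) @ K[:, i]) <= 0:
                a[i] += 1  # mistake: add k(x_i, .) to the expansion
    return a
```

On separable data (e.g. two well-separated Gaussian blobs with an RBF kernel) the mistake counts stop changing after a few epochs and the training points are all classified correctly.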

- Metric, normed, and unitary spaces, Cauchy sequences and completion, Banach and Hilbert spaces
- Bounded linear operators and the Riesz Theorem
- Equivalent notions of an RKHS: existence of reproducing kernel, boundedness of the evaluation operator
- Positive definiteness of reproducing kernels, the Moore-Aronszajn Theorem
- Mercer's Theorem for representing kernels

Supplementary lecture **slides**, last modified 22 Mar 2012
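The positive definiteness of reproducing kernels covered above is easy to probe numerically: every Gram matrix must have non-negative eigenvalues, and this is preserved under sums and elementwise (Schur) products, which is one way kernels are combined to make new kernels. A small self-contained check (my own example, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))

# Gram matrices of two standard kernels on the same points
sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
K_rbf = np.exp(-sq / 2)   # Gaussian kernel, bandwidth 1
K_lin = X @ X.T           # linear kernel

def min_eig(K):
    # Smallest eigenvalue of a symmetric matrix
    return np.linalg.eigvalsh(K).min()

# Each Gram matrix is positive semi-definite, and so are the sum and
# the elementwise product of two kernels (up to floating-point error).
for K in (K_rbf, K_lin, K_rbf + K_lin, K_rbf * K_lin):
    assert min_eig(K) > -1e-8
```

The small negative tolerance accounts for floating-point round-off on eigenvalues that are exactly zero in theory (the linear kernel's Gram matrix here has rank at most 3).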

- Loss and risk, estimation and approximation error, a new interpretation of MMD
- Why use an RKHS: comparison with other function classes (Lipschitz and bounded Lipschitz)
- Characteristic kernels and universal kernels
