Lecture 15, Feb. 26th, 2015: Sparse coding and the manifold

In this lecture we will continue our discussion of unsupervised learning methods. We will study sparse coding and the manifold interpretation of autoencoders.

Please study the following material in preparation for the class:

  • Lecture 8 (8.1 to 8.9) of Hugo Larochelle’s course on Neural Networks.
  • Chapter 13 of the Deep Learning Textbook.

Other relevant material:

 


9 thoughts on “Lecture 15, Feb. 26th, 2015: Sparse coding and the manifold”

  1. How does one apply a sparse coding model at test time, or when using the code in a classification algorithm? It seems to be a costly operation to infer the code every time, especially if we would like to use random patches of the image as the input to the classifier and cannot precompute the sparse codes.

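    For reference, inferring the sparse code of a new input x amounts to solving an optimization problem of the form min_h ½‖x − Dh‖² + λ‖h‖₁ for the learned dictionary D, which is indeed much costlier than a feed-forward pass. Below is a minimal NumPy sketch of one standard solver, ISTA; the dictionary, λ, and iteration count are arbitrary illustrative choices, not values from the lecture.

```python
import numpy as np

def infer_sparse_code(x, D, lam=0.1, n_iters=100):
    """Infer h minimizing 0.5*||x - D h||^2 + lam*||h||_1 via ISTA."""
    h = np.zeros(D.shape[1])
    # Step size 1/L, with L the Lipschitz constant of the smooth term's gradient.
    L = np.linalg.norm(D, ord=2) ** 2
    for _ in range(n_iters):
        # Gradient step on the reconstruction term.
        z = h - D.T @ (D @ h - x) / L
        # Soft-thresholding: proximal step for the L1 penalty.
        h = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return h

# Toy usage: a random unit-norm dictionary and a random "patch".
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
x = rng.standard_normal(64)
h = infer_sparse_code(x, D)
print("nonzero code entries:", np.count_nonzero(h))
```

    Because this inference has to be repeated for every patch, one workaround discussed in the literature is to train a fast parametric encoder to approximate the inferred codes (e.g. predictive sparse decomposition), so that test-time coding becomes a single feed-forward pass.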

  2. In section 13.1 of the book, at the end of page 269, it is mentioned that
    “1. Learning a representation h of training examples x such that x can be approximately recovered from h through a decoder. Note that this needs not be true for any x, only for those that are probable under the data generating distribution.”
    What does “probable under the data generating distribution” mean? Is it that a single auto-encoder can’t learn the representations of different data coming from submanifolds that are “far apart”? What happens if, for instance, two images are visually alike but come from two very distant manifolds? Is this possible?

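    On the phrase “probable under the data generating distribution”: it only requires the autoencoder to reconstruct well the inputs lying on or near the region where training data concentrates (the data manifold); for arbitrary points far from that region, reconstruction may be poor. Below is a toy NumPy sketch of this contrast, using a linear autoencoder whose optimal code subspace coincides with the principal subspace, so the weights can be obtained in closed form from an SVD; the data dimensions and noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data generating distribution: points near a 2-D subspace of R^20.
d, k, n = 20, 2, 1000
basis = rng.standard_normal((d, k))
X = rng.standard_normal((n, k)) @ basis.T + 0.01 * rng.standard_normal((n, d))

# Linear autoencoder with a k-dimensional code: its optimal weights span the
# top-k principal subspace, taken here directly from the SVD of the data.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
W = Vt[:k].T                      # (d, k) encoder/decoder weights

def reconstruct(x):
    h = (x - mean) @ W            # encode
    return mean + h @ W.T         # decode

x_on = rng.standard_normal(k) @ basis.T   # probable under the data distribution
x_off = rng.standard_normal(d)            # arbitrary point far from the manifold

print("on-manifold reconstruction error :", np.linalg.norm(x_on - reconstruct(x_on)))
print("off-manifold reconstruction error:", np.linalg.norm(x_off - reconstruct(x_off)))
```

    The contrast in errors is the point of the quoted passage: reconstruction only needs to be good where the data generating distribution puts mass.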
