Leaderboard for the class project

You are encouraged to publicize your results on the class project through the newly-created leaderboard wiki, which you can find here.

The wiki is editable by anybody with a GitHub account, so feel free to add your results yourself. Here are the instructions, taken from the wiki:

Use this wiki to publicize your results on the Dogs vs. Cats class project.

Every time you get a better result on the challenge, you can add an entry to the list below (at the appropriate place, please, so that the list stays sorted by test error rate). Make sure to include a link to a blog post detailing how you achieved that result. The format for entries is as follows:

<test error rate> (<train error rate>, <valid error rate>): <short description> (<link to blog post>)
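For illustration, a hypothetical entry (all numbers and the description are invented) might look like:

0.0850 (0.0213, 0.0897): convnet with dropout and data augmentation (link to blog post)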

6 thoughts on “Leaderboard for the class project”

  1. Vincent, is there a way to get more detailed performance results from Pylearn2 (such as a confusion matrix)?
    I am pretty sure that it could significantly help us improve our algorithm for this (or any other future) project.

  2. Not that I’m aware of.

    It’s not that difficult to create your own script that does that, though (provided you’re already familiar enough with Pylearn2 and Theano). I just pushed two scripts on my IFT6266H15 repo which compute error rates and confusion matrices respectively:

    https://github.com/vdumoulin/ift6266h15/blob/master/code/pylearn2/scripts/dogs_vs_cats_error.py

    and

    https://github.com/vdumoulin/ift6266h15/blob/master/code/pylearn2/scripts/dogs_vs_cats_confusion.py

    You can use these scripts as a starting point to compute fancier performance metrics.
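    The linked scripts rely on Pylearn2 internals, but the core computation behind both is small. As a rough sketch of what they compute (function names and the toy labels below are my own, not taken from the repo):

    ```python
    import numpy as np

    def error_rate(y_true, y_pred):
        """Fraction of misclassified examples."""
        y_true = np.asarray(y_true)
        y_pred = np.asarray(y_pred)
        return float(np.mean(y_true != y_pred))

    def confusion_matrix(y_true, y_pred, n_classes):
        """cm[i, j] counts examples of true class i predicted as class j."""
        cm = np.zeros((n_classes, n_classes), dtype=int)
        for t, p in zip(y_true, y_pred):
            cm[t, p] += 1
        return cm

    # Toy labels for the binary Dogs vs. Cats setting (0 = cat, 1 = dog)
    y_true = [0, 0, 1, 1]
    y_pred = [0, 1, 1, 1]
    print(error_rate(y_true, y_pred))             # 0.25
    print(confusion_matrix(y_true, y_pred, 2))    # [[1 1]
                                                  #  [0 2]]
    ```

    In practice you would obtain `y_pred` by compiling the model's output into a Theano function and taking the argmax over class probabilities, which is what the repo scripts do.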

  3. Vincent,

    Is there a way to predict how much GPU memory a certain network configuration should take using Pylearn2 (before getting the actual allocation error)?

    Thanks,
    Maor.

    1. Not that I’m aware of. I usually try whatever hyperparameters I’ve chosen and keep decreasing the batch size until the allocation error disappears.

      You could count how many parameters you have and multiply that by either 32 or 64 bits (depending on the value of floatX you set) to get a rough estimate, but additional memory is used to store intermediate results, so you should treat that as a lower bound.
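      That back-of-the-envelope estimate is easy to script. A minimal sketch (the helper name and the example layer sizes are made up for illustration):

      ```python
      def parameter_memory_mb(n_params, floatX="float32"):
          """Lower bound on GPU memory for the parameters alone.

          Activations, gradients, and other intermediate results add
          more on top, so the true usage is strictly larger.
          """
          bytes_per_value = 4 if floatX == "float32" else 8
          return n_params * bytes_per_value / (1024.0 ** 2)

      # Example: one fully-connected 4096 x 4096 weight matrix plus biases
      n_params = 4096 * 4096 + 4096
      print(round(parameter_memory_mb(n_params), 1))            # 64.0 (MB)
      print(round(parameter_memory_mb(n_params, "float64"), 1)) # 128.0 (MB)
      ```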

      1. Thank you very much again. I think this could make a great feature/tool for Pylearn2, since we are always constrained to fit in the GPU’s memory.
