Monday, December 19, 2011

Visualizing ChaLearn Gestures Test Data

The colored paths are labeled training data, just like in my last post on this.

The plot title gives the "answer" for a test video:


Could you tell from just this what the sequence of gestures was?

Not perfectly, but way better than chance.

See a couple more examples by clicking:


I'm sure that a prediction method based only on these principal components isn't the best way to go (is it ever, except for reducing the size of your problem for computational purposes?), but I'd like to try it -- then at least I can visualize my algorithm very nicely and see where it's failing.
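For anyone curious, here's a minimal sketch of the projection-and-plotting step (not the actual code from my repository; `train` and `test` are hypothetical frames-by-pixels matrices, one flattened frame per row):

```r
library(ggplot2)

## Fit PCA on the labeled training frames.
pc <- prcomp(train, center = TRUE)

## Project one test video's frames onto the first two components,
## keeping frame order so the path can be drawn through time.
path <- as.data.frame(predict(pc, test)[, 1:2])
path$frame <- seq_len(nrow(path))

## Draw the frame-to-frame path, colored by time.
ggplot(path, aes(PC1, PC2)) +
  geom_path(colour = "grey50") +
  geom_point(aes(colour = frame)) +
  ggtitle("Test video in PC1/PC2 space")
```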

And a few more (the first ten from the "devel01" test data) here (a PDF, not an animation).

4 comments:

  1. You should also try 3D plots.

    I'm guessing these are generated by R.

  2. Yep, that's definitely a good thought/plan.

    And yep, R -- you've got it! The ggplot2 style is pretty recognizable in these.

    The code is actually all available (at the GitHub repository linked in the previous post), although it's a bit of a mess. (I apologize.)

  3. Some questions...

    (1) Why are you using PCA? Andrew Ng, of ml-class.org fame, suggests that it's better to tackle the problem without PCA first and see whether it can be done that way.

    (2) Have you thought about what approach to take? As a starter, I'm thinking of doing k-nearest-neighbor classification after segmenting the video into individual gestures. It's the simplest approach and should serve as a baseline for more sophisticated ones.

  4. (1) I'm not really using PCA as a means of regularization, which is what he advises against. I have a lot of fun visualizing data (and I do think it helps), so I just wanted a way to map the frames to a lower dimension so I could plot them. Maybe it will help solve the problem, and maybe it won't.

    (2) That makes sense. But you could also match subsequences of frames directly via DTW (the R package looks nice -- http://dtw.r-forge.r-project.org/), as Isabelle suggests in the forum (http://www.kaggle.com/c/GestureChallenge/forums/t/1150/getting-started). I'm not sure what I'll try first, but I think the more direct approach is more intuitive and will probably work better. (There's a rough sketch of this DTW matching after the comments.)

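Since the comments get into specifics, here's a minimal sketch of the DTW-based nearest-neighbor matching discussed above. It assumes segmentation has already happened: `templates` (a named list of frames-by-features matrices, one per labeled training gesture) and `test_seq` are hypothetical names, not code from my repository.

```r
library(dtw)  # http://dtw.r-forge.r-project.org/

## DTW distance from the test subsequence to each labeled gesture;
## normalizedDistance adjusts for sequences of different lengths.
dists <- sapply(templates, function(tmpl)
  dtw(test_seq, tmpl)$normalizedDistance)

## 1-nearest-neighbor: predict the gesture whose template aligns best.
predicted <- names(which.min(dists))
```

Using the normalized distance keeps short and long gestures comparable; for the real problem you'd still need to segment the test video first (or match subsequences some other way, e.g. open-ended DTW).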