The colored paths are labeled training data, just like in my last post on this.

The title gives the "answer" for a test video:

Could you tell from just this what the sequence of gestures was?

Not perfectly, but way better than chance.

See a couple more examples by clicking:

I'm sure that a prediction method based only on these principal components isn't the best way to go (is it ever? except for reducing the size of your problem for computational purposes), but I'd like to try -- then at least I can visualize my algorithm very nicely and see where it's failing.

And a few more (first ten from "devel01" test data) here (pdf, not an animation).

## Monday, December 19, 2011

## Sunday, December 18, 2011

### speaking of machine learning

Umm, not exactly:

And "Albanian" doesn't come first ("Africaans" does), so it's not even that kind of bug.

## Saturday, December 17, 2011

### Defining Churn Rate--is instantaneous better?

I enjoyed this post on defining "churn rate" at the Shopify tech blog, though I don't remember at all how it ended up in my reading list.

The author goes through ways you might define "churn rate", showing that each of them could be misleading.

Like a good ex-calculus teacher (from one year in grad school), I wondered: Isn't any "rate" more naturally understood as an instantaneous quantity? Doesn't the problem come from the fact that he's trying to understand churn over an interval first?
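To make the calculus-teacher instinct concrete: in continuous time, with *C(t)* customers, new-customer inflow *n(t)*, and instantaneous churn rate *r(t)*, the bookkeeping is

```latex
\frac{dC}{dt} = n(t) - r(t)\,C(t)
```

so any churn number computed over an interval is some average of *r(t)* over that interval. (This notation is mine, not the Shopify post's.)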

To illustrate, here's my oversimplified model:

- You start with some number of customers (*initial_customers*) at time *t=1*.
- Each day you gain some number of new customers, drawn from a Poisson distribution with expected value *expected new customer rate*.
- Each day you lose some proportion of your customers, drawn from a binomial distribution where the expected value of the proportion you lose is your *daily churn rate*.
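I won't reproduce my simulation code here, but as a rough sketch of the three bullets above (in Python rather than the R/Octave I've been using, and with function names that are my own):

```python
import math
import random

def sample_poisson(rng, lam):
    # Knuth's algorithm for one Poisson draw with mean lam
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate(initial_customers, expected_new_per_day, daily_churn_rate, days, seed=0):
    rng = random.Random(seed)
    customers = initial_customers
    history = [customers]
    for _ in range(days):
        gained = sample_poisson(rng, expected_new_per_day)
        # binomial draw: each current customer independently churns today
        lost = sum(1 for _ in range(customers) if rng.random() < daily_churn_rate)
        customers = customers + gained - lost
        history.append(customers)
    return history
```
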

I suspect that everything I say will apply also to more complicated/realistic models.

Here's a silly simulation (here's code for the simulation) of this model, where I've specified the expected churn rate (flat at first, then sloping up) and the expected new customer rate (flat).

Churn rates:

Daily number of customers:

Assuming your number of customers doesn't change too much in a day, the *daily churn rate* is almost like an instantaneous rate.

So, I have two notions of daily churn rate:

The definition he ended up with is an appropriate weighted average of daily churn rates, and he must have been thinking of this.
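One way to see why a weighted average is the natural period-level summary: if each day's rate is weighted by that day's customer count, the weighted average collapses to total customers lost divided by total customer-days. A small numeric check (the numbers are made up):

```python
customers = [100, 80, 60]   # customers at the start of each day
lost      = [10, 8, 12]     # customers lost during each day

daily_rates = [l / c for l, c in zip(lost, customers)]

# average of daily rates, weighted by customers at risk each day
weighted_avg = sum(c * r for c, r in zip(customers, daily_rates)) / sum(customers)

# period-level rate: total lost / total customer-days
period_rate = sum(lost) / sum(customers)

assert abs(weighted_avg - period_rate) < 1e-12
```
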

Why take an average over a period? Because actual churn rates are *noisy*, so averaging is one way to smooth out that noise.

But once we're thinking in these terms, aren't there all sorts of standard time series methods for helping us model (and even project) churn rates?

In other words: Averaging isn't the only way to separate the "noise" (randomness in churn) from the "signal" (expected value of churn).

I'd love to hear about anything I'm missing here. I've never thought about this before, so forgive me if I'm very confused.
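One such standard method is an exponentially weighted moving average, which discounts older days instead of weighting a window uniformly. A generic sketch (nothing here is from the original post):

```python
def ewma(rates, alpha=0.2):
    """Exponentially weighted moving average: recent days count more."""
    smoothed = [rates[0]]
    for r in rates[1:]:
        smoothed.append(alpha * r + (1 - alpha) * smoothed[-1])
    return smoothed
```
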

## Sunday, December 11, 2011

### Visualizing Gestures as Paths

Kaggle is hosting an exciting new competition in which the object is to learn to identify sequences of gestures from just one example of each gesture. I would bet this competition has a lot of potential to attract academics interested in machine learning.

The competition comes with sample code for importing the datasets (AVI videos) to MATLAB, but right now I don't have MATLAB (although a recent post from one of my favorite bloggers reminded me to obtain it even before this annoyance).

The other tools I've used for data analysis are R and Octave (a free program similar to MATLAB). The best option I found for importing the data was Octave's 'video' package (see below the fold for installation tips). Please let me know if you find other possibilities!

The data come in batches, so I imported the first batch of data, saved it, and loaded it in R. When importing the data, I also shrank each image, for two reasons:

- Smaller dataset is easier to deal with (shrinking each dimension by a factor of 3 shrinks the final dataset by a factor of 9).
- I also hoped that blurring the fine distinctions of a larger image might cause each video to trace out more of a continuous path.
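The shrinking step itself is just block-averaging each frame. My actual code is on Github; this is a NumPy sketch of the same idea, with a function name of my own:

```python
import numpy as np

def shrink(img, factor=3):
    """Downsample a 2-D grayscale frame by averaging factor x factor blocks."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor      # crop so dimensions divide evenly
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))
```
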

Then, treating each frame (image) as one "row" of data, I plotted the first two principal components (of the training data only) as a 'quick and dirty' way to visualize the data. In this plot, the points represent frames, and the colors encode which video each frame came from:
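The frames-as-rows projection can be sketched in a few lines (again NumPy, not my actual R code):

```python
import numpy as np

def first_two_pcs(frames):
    """Project each frame onto the first two principal components.

    frames: array of shape (n_frames, height, width).
    Returns an (n_frames, 2) array of coordinates to plot.
    """
    X = frames.reshape(len(frames), -1).astype(float)  # one row per frame
    Xc = X - X.mean(axis=0)                            # center each pixel column
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)  # PCs are the rows of Vt
    return Xc @ Vt[:2].T
```
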

Observations:

- I was surprised the paths aren't more continuous, although they do trace out somewhat continuous paths.
- Each gesture/video traces out a somewhat distinctive path.
- Most gestures/videos begin and end in roughly the same region (this makes sense -- each video seems to begin and end with the person in roughly the same position).

- Probably it would make more sense to include the "test" data in my PCA.
- The real question is how much the appropriate segment of each path in the "test" data resembles the corresponding path in the training data.
- I want to visualize the data (and do my learning) with an embedding/transformation that makes more sense than PCA. Presumably there is some structure in the set of all images, and a method like Laplacian Eigenmaps or ISOMAP will presumably do a better job taking advantage of that.

**All of my code for this is available on Github.**