Sunday, August 5, 2012

Random forests for visualizing data

Recently I read about using random forests as a way to visualize data. Here's how it works:
  1. Have a data set
  2. Create a set of fake data by independently permuting each column -- each column keeps the same marginal distribution, but the relationships between columns are destroyed.
  3. Train a random forest to distinguish the fake data from the original data.
  4. Get a "proximity" measure between points, based on how often the points are in the same leaf node.
  5. Embed the points in 2D in such a way as to distort these proximities as little as possible.
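The steps above can be sketched with scikit-learn. This is a minimal illustration, not the post's actual code (which is on github): the helper name, the toy data, and the choice of metric MDS (the post used Kruskal's non-metric MDS) are my own.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.manifold import MDS

rng = np.random.default_rng(0)

def rf_proximity_embedding(X, n_trees=100):
    # Step 2: fake data -- permute each column independently, so each
    # column keeps its distribution but cross-column structure is gone.
    X_fake = np.column_stack([rng.permutation(col) for col in X.T])
    # Step 3: train a forest to tell real rows from permuted ones.
    X_all = np.vstack([X, X_fake])
    y_all = np.r_[np.ones(len(X)), np.zeros(len(X_fake))]
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    rf.fit(X_all, y_all)
    # Step 4: proximity between real points = fraction of trees in
    # which the two points land in the same leaf.
    leaves = rf.apply(X)  # shape (n_points, n_trees)
    prox = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)
    # Step 5: embed in 2D so the dissimilarities 1 - proximity are
    # distorted as little as possible.
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    return mds.fit_transform(1 - prox)

# Toy usage: 200 points with two correlated features.
X = rng.normal(size=(200, 2))
X[:, 1] += X[:, 0]
emb = rf_proximity_embedding(X)
```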
I decided to try this in a case where I would know what the outcome should be, as a way of thinking about how it works. So I generated 931 images of diamonds varying in two dimensions:
  1. Size
  2. Position (only how far left/right)
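Something along these lines can generate such images (a sketch under my own assumptions -- the actual generation code is on github, and the 50x50 pixel size, the ranges of sizes and offsets, and the function name here are mine; 50x50 matches the R^2500 mentioned below, but the grid below does not reproduce the post's 931 images):

```python
import numpy as np

def diamond_image(size, x_offset, side=50):
    # A filled diamond: pixels whose L1 distance to the center is at
    # most `size` (the half-diagonal in pixels).
    cy, cx = side // 2, side // 2 + x_offset
    ys, xs = np.mgrid[0:side, 0:side]
    return (np.abs(ys - cy) + np.abs(xs - cx) <= size).astype(float)

# One flattened row per image, varying size and horizontal position.
images = np.stack([diamond_image(s, dx).ravel()
                   for s in range(4, 20)
                   for dx in range(-10, 11)])
```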

Then I followed the above procedure, getting this:

Neat! The random forest even picked up on a feature of this space that I wasn't expecting: for the same difference in position, small diamonds need to be closer to each other than large diamonds. None of my diamonds has a diameter smaller than 4 pixels, but imagine if the sizes got so small that the diamond wasn't even there -- then position wouldn't matter at all for those diamonds.

I set one column of pixels to random values, and the method still worked just as well. (Which makes sense, as the random forest only cares about pixels that help it distinguish the real images from the permuted fakes.)

A cool technique that I'd love to try some more! For one, I'd like to understand better how it differs from various manifold learning methods. One nice feature is that you could easily use this with a mix of continuous and categorical variables.

Note that starting with Euclidean distance between images (as vectors in R^2500) and mapping points to 2D doesn't seem to produce anything useful:
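For contrast, that baseline is just classical MDS on raw pixel distances. A sketch (the random stand-in array is mine; in the post this would be the 931 flattened diamond images):

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.metrics import pairwise_distances

rng = np.random.default_rng(0)
images = rng.random((50, 2500))  # stand-in for the flattened images

D = pairwise_distances(images)   # Euclidean distances in R^2500
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
emb = mds.fit_transform(D)
```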

Code available on github.


  1. What method did you use for creating the embedding?

  2. Kruskal's Non-metric Multidimensional Scaling --

  3. Multidimensional scaling is typically a very bad way of visualizing high-dimensional data. I think one of the best algorithms for visualizations like these is t-SNE.

  4. That method is great! Thanks.
