Friday, September 21, 2007

Manifold learning and Geometry

In June, Partha Niyogi gave the invited lecture at SoCG. The subject was manifold learning:
Given a collection of (unlabelled) data inhabiting some high-dimensional space, can you determine whether they actually lie on some lower-dimensional manifold in this space?
This talk was a great window into an area of machine learning with strong geometric connections, an area where geometers could profitably play and make useful contributions.
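To make the question concrete, here's a minimal sketch (the library, dataset, and parameters are my own illustrative choices, not anything from the talk) that samples points from a two-dimensional manifold embedded in three-dimensional space and looks for the low-dimensional structure with Isomap, one of the standard manifold learning methods:

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

# Points that lie (up to noise) on a 2-manifold embedded in R^3.
X, t = make_swiss_roll(n_samples=1500, noise=0.05, random_state=0)

# A linear method sees genuinely 3-dimensional data: the roll's
# curvature spreads variance across all three principal directions.
pca = PCA(n_components=3).fit(X)
print("PCA variance ratios:", np.round(pca.explained_variance_ratio_, 3))

# Isomap approximates geodesic distances along a k-nearest-neighbor
# graph and embeds the points in the plane, "unrolling" the manifold.
Y = Isomap(n_neighbors=12, n_components=2).fit_transform(X)

# One embedding coordinate should track the roll parameter t closely.
corrs = [abs(np.corrcoef(Y[:, i], t)[0, 1]) for i in range(2)]
print("max |corr| with roll parameter:", round(max(corrs), 3))
```

The gap between the two methods is the signal: PCA needs all three directions to explain the data, while the neighborhood-graph geodesics reveal that two intrinsic coordinates suffice.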

Now NIPS (one of the major machine learning conferences) is returning the favor, with a workshop devoted to the topic of Topology Learning:

There is growing interest in Machine Learning in applying geometrical and topological tools to high-dimensional data analysis and processing.

Given a finite set of points in a high-dimensional space, the approaches developed in the field of Topology Learning aim to learn, explore, and exploit the topology of the shapes (via topological invariants such as connectedness, intrinsic dimension, or the Betti numbers), manifolds or not, from which these points are assumed to be drawn.

Applications likely to benefit from these topological characteristics have been identified in Exploratory Data Analysis, Pattern Recognition, Process Control, Semi-Supervised Learning, Manifold Learning, and Clustering.

However, the integration of the problems faced in Topology Learning into the Machine Learning and Statistics frameworks is still in its infancy. We hope this workshop will ignite cross-fertilization between Machine Learning, Computational Geometry, and Topology, benefiting all of them by leading to new approaches, deeper understanding, and stronger theoretical results for the problems raised by Topology Learning.
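To make the invariants in the announcement concrete, here is a self-contained sketch (the point cloud and the choice of eps are illustrative assumptions) that estimates the simplest one, connectedness, by counting the connected components (the zeroth Betti number) of an eps-neighborhood graph with union-find:

```python
import numpy as np

def betti_zero(points, eps):
    """Number of connected components of the eps-neighborhood graph,
    an estimate of the zeroth Betti number of the underlying shape."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # Link every pair of points at distance at most eps.
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) <= eps:
                parent[find(i)] = find(j)

    return len({find(i) for i in range(n)})

# Two well-separated circles in the plane: for eps above the sample
# spacing but below the gap between the circles, the estimate is 2.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.c_[np.cos(theta), np.sin(theta)]
cloud = np.vstack([circle, circle + [5.0, 0.0]])

print(betti_zero(cloud, eps=0.5))  # prints 2
```

Higher Betti numbers require building a simplicial complex on the points (a Vietoris-Rips complex, say) and computing its homology, which is exactly where computational geometry and topology enter the picture.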

The list of invited speakers bridges geometry, topology, and learning. It looks like a great forum for continuing the cross-talk between machine learning and computational geometry that Partha kicked off at SoCG.

3 comments:

  1. Here are some other rough connections between learning and geometry, well known to those who know them. Surely there are more.

    What in learning is called "leave one out", or "deletion", in geometry is called "backwards analysis"

    What in learning is called "VC-dimension", in geometry is called, well, "VC-dimension"

    What in learning is called "sample compression" using maybe "combinatorial dimension", in geometry is called "random sampling"

    What in learning is called "sparse greedy approximation" in geometry is called "the Frank-Wolfe algorithm" and "coresets"

    What in learning is called "winnow" in geometry is sometimes called "iterative reweighting"

    ...plus of course robust estimators, k-means, doubling dimension, etc.

    -Ken

    (Apologies if this comment is given twice)
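As a concrete illustration of the last correspondence in the comment above: winnow keeps one weight per feature and reweights the active features multiplicatively after each mistake, the same iterative-reweighting pattern that recurs in geometric algorithms. Here is a minimal sketch; the toy target concept and all parameters are illustrative assumptions:

```python
import numpy as np

def winnow(examples, labels, n_features, alpha=2.0):
    """Winnow: predict with a weighted threshold, then multiplicatively
    reweight the active features after each mistake."""
    threshold = n_features / 2
    w = np.ones(n_features)
    mistakes = 0
    for x, y in zip(examples, labels):
        pred = 1 if w @ x >= threshold else 0
        if pred != y:
            mistakes += 1
            if y == 1:
                w[x == 1] *= alpha  # false negative: promote active features
            else:
                w[x == 1] /= alpha  # false positive: demote active features
    return w, mistakes

# Toy target: the disjunction of features 0 and 3 over 20 boolean features.
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(500, 20))
y = X[:, 0] | X[:, 3]

w, mistakes = winnow(X, y, 20)
print("mistakes:", mistakes)  # the classic bound is O(r log n) for r relevant features
print("heaviest features:", np.sort(np.argsort(w)[-2:]))  # typically [0 3]
```

The multiplicative update is what makes the mistake bound logarithmic in the number of features, and the same trick of boosting the weight of "violated" elements drives reweighting schemes in geometry.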

  2. Excellent. We need a translator between machine learning and geometry :)

  3. It sure looks like there is plenty of overlap, especially in light of recent developments in Compressed Sensing.
    I wrote something about it at:
    http://hunch.net/?p=273
    and the attendant comments. I am also intrigued by the connection to the nuclear norm:
    http://arxiv.org/abs/0706.4138
    and the diffusion methods on manifolds of Lafon, Maggioni, and Coifman.


    Igor
    http://nuit-blanche.blogspot.com

