Comments on The Geomblog: SoCG 2007: Geometric Views of Learning

Anonymous (2007-06-06):
For a more general extension of the idea, check out Lafon's (http://www.math.yale.edu/~sl349/tutorials.html) and Maggioni's (http://www.math.duke.edu/~mauro/) work (among others) on diffusion maps and diffusion geometry.

Anonymous (2008-10-30):
"This assumes that the data was sampled uniformly (or mostly uniformly) from the manifold."

In this context, is uniform sampling the same as random sampling? I'm not very familiar with manifold learning methods -- from what I have read, linear random projections from a high-dimensional space to a lower-dimensional subspace preserve distances and angles (the Johnson-Lindenstrauss lemma). The Laplacian is a non-linear function. Is there something similar to the JL lemma for non-linear embeddings?

Suresh Venkatasubramanian (2008-10-31):
I'd view these as fundamentally different operations. In the Laplacian case, you're hoping that the data (as presented to you) came from a well-behaved sampling process. In the case of JL-type transformations, you construct a random transform (via Gaussians, orthogonal transforms, or whatever).
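As a minimal sketch of the JL-type construction Suresh describes (a random Gaussian transform that approximately preserves pairwise distances) — all dimensions, names, and the synthetic data below are illustrative assumptions, not from the discussion:

```python
import numpy as np

# Johnson-Lindenstrauss-style random projection: project n points from
# ambient dimension d down to k using a scaled Gaussian random matrix.
# Pairwise distances are preserved up to small distortion with high
# probability when k is on the order of log(n) / eps^2.
rng = np.random.default_rng(0)
n, d, k = 50, 1000, 300            # illustrative sizes

X = rng.normal(size=(n, d))        # synthetic high-dimensional points
R = rng.normal(size=(d, k)) / np.sqrt(k)   # scaled Gaussian transform
Y = X @ R                          # linear projection to k dimensions

def pairwise(A):
    """All pairwise Euclidean distances between rows of A."""
    diff = A[:, None, :] - A[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

DX, DY = pairwise(X), pairwise(Y)
mask = ~np.eye(n, dtype=bool)      # drop zero self-distances
ratio = DY[mask] / DX[mask]
print(ratio.min(), ratio.max())    # both near 1: distances survive
```

Note the contrast with the Laplacian case: here nothing is assumed about how the points were sampled; the guarantee comes entirely from the randomness of the transform.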