This post is courtesy of Pierre Borgnat, a huge thanks to him!

Vandergheynst

And what about Pierre Vandergheynst's lectures? Spread over three sessions, Pierre discussed recent advances in Harmonic Analysis on Graphs and Networks. The plan of the lectures was already given on the blog, so I will cover only a few salient points. In short, this was a marvellous series of lectures, especially given the flexibility that Pierre displayed: when an electricity outage struck, he continued his lecture on the blackboard, and then improvised a follow-up with both blackboard and slides due to popular insistence!

First session, on the second day: Pierre covered the basics of Spectral Graph Theory, and all participants got a refresher on Laplacian matrices, the spectrum of a graph, spectral cuts, spectral clustering, and graph embedding using Laplacian eigenmaps. A nice example was shown of matching point clouds of a warped surface, for instance the shape of a man captured as a point cloud. Unfortunately, this would not work well if someone poses with a hand on top of his head. Take that, silhouette matching algorithm!
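To make the Laplacian eigenmaps idea concrete, here is a minimal sketch in Python (a hypothetical toy graph, numpy and scipy assumed): the embedding coordinates are simply the first non-trivial eigenvectors of the Laplacian.

```python
# Minimal sketch of a Laplacian eigenmaps embedding on a toy graph.
import numpy as np
from scipy.linalg import eigh

# Toy graph: a 6-cycle with one chord.
N = 6
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
A[0, 3] = A[3, 0] = 1.0  # chord

L = np.diag(A.sum(axis=1)) - A      # combinatorial Laplacian L = D - A

# Eigenvectors sorted by increasing eigenvalue; skip the constant one.
vals, vecs = eigh(L)
embedding = vecs[:, 1:3]            # 2-D Laplacian eigenmap coordinates
print(embedding)
```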
Then, attacking the main core of the lectures, Pierre defined operators to study signals indexed on graphs: gradients, regularity, total variation. A central point is the intriguing and important notion of the coherence of a graph (the largest absolute value of the components of the eigenvectors of the Laplacian); more on that later on.
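As a quick illustration of that definition, here is a minimal sketch (toy random graph, numpy/scipy assumed) computing the coherence and comparing it with the reference value 1/\sqrt{N}:

```python
# Minimal sketch: coherence = largest absolute entry over all
# (orthonormal) eigenvectors of the graph Laplacian.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
N = 20
A = (rng.random((N, N)) < 0.2).astype(float)
A = np.triu(A, 1)
A = A + A.T                             # symmetric adjacency, no self-loops
L = np.diag(A.sum(axis=1)) - A          # combinatorial Laplacian

_, U = eigh(L)                          # columns: orthonormal eigenvectors
mu = np.abs(U).max()                    # coherence of the graph
print(f"coherence = {mu:.3f}, 1/sqrt(N) = {1 / np.sqrt(N):.3f}")
```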
Using functional calculus on graphs, it is possible to define operators through the spectral domain so as, for instance, to solve the diffusion equation on a graph. The good thing is that it is now possible to give meaning to what filtering of graph signals is (with examples in denoising and recommendation systems). End of the first session.
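As a concrete instance of this functional calculus, here is a minimal sketch (toy path graph, numpy/scipy assumed) that solves the heat diffusion equation by applying the kernel g(\lambda) = e^{-t\lambda} to the graph Fourier coefficients of a signal:

```python
# Minimal sketch: diffusion on a graph as a spectral-domain filter,
# i.e. computing exp(-t L) f through the eigendecomposition of L.
import numpy as np
from scipy.linalg import eigh

# Path graph on N vertices.
N = 10
A = np.diag(np.ones(N - 1), 1)
A = A + A.T
L = np.diag(A.sum(axis=1)) - A

vals, U = eigh(L)                        # L = U diag(vals) U^T

f = np.zeros(N)
f[0] = 1.0                               # impulse at vertex 0
t = 2.0                                  # diffusion time
f_hat = U.T @ f                          # graph Fourier transform
f_diffused = U @ (np.exp(-t * vals) * f_hat)  # apply g(lambda) = exp(-t lambda)
print(f_diffused)
```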

Third day, second lecture: no electricity… And so what? Pierre improvised on the blackboard. In the lectures, he showed that it is possible to define first operators to localise a kernel on a graph, and then how this can be interpreted as a translation so as to propose a generalisation of the convolution operator. Still, the “agonizing limit of intuition” limits some of the properties: the translation operator is not unitary when the coherence departs from 1/\sqrt{N}. That might be problematic in specific situations, or will it be an asset to study the local topology of graphs? (We would be saved when studying graphs on manifolds or accepting a group of translations; these are developments not covered here.) With all that, it was direct to define the spectral graph wavelets, which provide a continuous and invertible transform using functions localised both in frequency and on vertices. The good points are that it is possible to obtain a frame, even a tight frame; that they have nice localisation properties; and finally that the implementation can be efficient using Chebyshev polynomials (“love Chebyshev!”). Better, a Python toolbox will be available soon enough… (the SGWT toolbox in Matlab already exists).
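The Chebyshev trick deserves a sketch: instead of diagonalising the Laplacian, one approximates the spectral kernel by a truncated Chebyshev series and applies it with matrix-vector products only. The kernel, graph, and truncation order below are toy choices of mine, not the SGWT defaults:

```python
# Minimal sketch: apply a spectral kernel g(L) to a signal with a
# truncated Chebyshev expansion, avoiding full diagonalisation.
import numpy as np
from numpy.polynomial import chebyshev as C

# Ring graph on N vertices.
N = 50
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

lmax = 4.0                               # upper bound on lambda_max for the ring
g = lambda lam: lam * np.exp(-lam)       # toy band-pass-like wavelet kernel

# Chebyshev interpolation of g on [0, lmax], mapped to [-1, 1].
K = 30
coef = C.chebinterpolate(lambda x: g((x + 1) * lmax / 2), K)

# Three-term recurrence T_k(L~) f with L~ = (2/lmax) L - I.
f = np.random.default_rng(1).standard_normal(N)
Lt = (2.0 / lmax) * L - np.eye(N)
T_prev, T_curr = f, Lt @ f
out = coef[0] * T_prev + coef[1] * T_curr
for k in range(2, K + 1):
    T_prev, T_curr = T_curr, 2.0 * (Lt @ T_curr) - T_prev
    out += coef[k] * T_curr
print(np.linalg.norm(out))
```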

Last day at Peyresq: two more hours by Pierre, in which we learned how to coarsen both signals and graphs by putting wavelets in a pyramidal algorithm. First ingredient: downsample the graph using the nodal domains of the (or one of the) eigenvectors with largest eigenvalue of the Laplacian. Second ingredient: filter the signals (easy, remember that we learnt about graph filters and wavelets). Third ingredient: coarsen the graph by keeping only half of the nodal domains, and transform the graph using the Kron reduction (that should activate your memories of basic electricity). A major property is that the Kron reduction (algebraically written as the Schur complement of a matrix) preserves properties of the Laplacian, and it is useful to compute spline-like interpolation of a signal. This allows us to see the difference between the filtered signal retained on the coarsened graph and the original signal on the original graph (having all the vertices). With all that, you have a sound Laplacian pyramid to filter and coarsen graph signals. To make all that more efficient, sparsification of the coarsened graphs is useful, using the work of Spielman and Srivastava (second side message of the lecture: once you know all about Chebyshev polynomials, read this paper in SIAM J. Comp. of 2011). The conclusion is that many applications in filtering on graphs, learning on graphs, visualisation of graphs, and so on, are now within our reach. In the end, it was a very motivating series of lectures.
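Here is a minimal sketch of the Kron reduction itself (toy path graph and an arbitrary choice of kept vertices, numpy assumed): eliminating a vertex subset via the Schur complement of the Laplacian yields again a valid Laplacian on the retained vertices.

```python
# Minimal sketch: Kron reduction as the Schur complement of the
# Laplacian with respect to the eliminated vertices.
import numpy as np

# Path graph on 6 vertices.
N = 6
A = np.diag(np.ones(N - 1), 1)
A = A + A.T
L = np.diag(A.sum(axis=1)) - A

keep = np.array([0, 2, 4])               # vertices retained after coarsening
drop = np.array([1, 3, 5])               # vertices eliminated

L11 = L[np.ix_(keep, keep)]
L12 = L[np.ix_(keep, drop)]
L22 = L[np.ix_(drop, drop)]

# Schur complement: L_red = L11 - L12 L22^{-1} L21.
L_red = L11 - L12 @ np.linalg.solve(L22, L12.T)
print(L_red)                             # a Laplacian on the kept vertex set
```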

What was skipped in the lectures? Material was given in the slides on manifold learning with graph spectral theory and on vertex-frequency analysis (in the spirit of time-frequency analysis and the Gabor transform, yet on graphs). I advise all of you to read the nice paper (on arXiv) by David Shuman, Benjamin Ricaud and P. V. on the latter subject, and to discuss with Pierre about the former one.
