{"id":4,"date":"2017-04-05T11:45:15","date_gmt":"2017-04-05T11:45:15","guid":{"rendered":"http:\/\/blog.metu.edu.tr\/velif\/?page_id=4"},"modified":"2025-01-06T07:42:42","modified_gmt":"2025-01-06T07:42:42","slug":"research","status":"publish","type":"page","link":"https:\/\/blog.metu.edu.tr\/velif\/research\/","title":{"rendered":"Research"},"content":{"rendered":"<h2><strong>Estimation of Time-Varying Graph Signals from Incomplete Observations<\/strong><\/h2>\n<p>Many applications nowadays involve the acquisition of data over certain time intervals on platforms such as social networks, communication networks and irregular sensor arrays. Such data can be modeled as time-varying graph signals. In many practical applications, it is not possible to observe graph signals as a complete data set with no missing samples; hence, estimating the unavailable observations of a graph signal from the available ones arises as an important problem. Typical applications involving time-varying graph signals include the prediction of how an epidemic will evolve in a society, the estimation of data lost in a sensor network due to sensor failures or communication problems, and the prediction of the tendencies of individuals in a social network, such as their consumption habits. The purpose of this project is to develop novel methods for the modeling and estimation of time-varying graph signals. (<a href=\"https:\/\/blog.metu.edu.tr\/velif\/research\/estimation-of-time-varying-graph-signals-from-incomplete-observations-tubitak-1001-project\/\">View project page<\/a>)<\/p>\n<h2>Domain Adaptation on Graphs<\/h2>\n<div class=\"page\" title=\"Page 1\">\n<div class=\"section\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>Most classification algorithms rely on the assumption that the labeled and unlabeled data samples at hand are drawn from the same distribution. 
However, in many practical data classification problems, the labeled training samples and the unlabeled test samples may have different statistics. Domain adaptation methods make use of the class labels abundantly available in a source domain in order to infer the label information in a target domain where labeled data are much scarcer. In order to be able to \u201ctransfer\u201d the information from one domain to another, some inherent relation must exist between the two domains.<\/p>\n<\/div>\n<div class=\"page\" title=\"Page 1\">\n<div class=\"section\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>We focus on settings where the source and the target data are represented with a graph in each domain. We consider that the source and the target graphs are related in such a way that the spectra of the source and the target class label functions on the two graphs share similar characteristics.<\/p>\n<\/div>\n<p>The idea in <a href=\"https:\/\/arxiv.org\/abs\/1803.05288\">this work<\/a> is to learn the spectrum of the label function on a source graph with many labeled nodes, and to transfer it to the spectrum of the target graph with fewer labeled nodes. A generic face manifold is illustrated in the figure below. Face images of three different individuals are indicated with different colors. 
While the class label function varies slowly along the blue direction, it has a relatively fast variation along the red direction.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-26 size-medium aligncenter\" src=\"https:\/\/blog.metu.edu.tr\/velif\/files\/2017\/04\/label_function-300x227.png\" alt=\"\" width=\"300\" height=\"227\" srcset=\"https:\/\/blog.metu.edu.tr\/velif\/files\/2017\/04\/label_function-300x227.png 300w, https:\/\/blog.metu.edu.tr\/velif\/files\/2017\/04\/label_function-768x582.png 768w, https:\/\/blog.metu.edu.tr\/velif\/files\/2017\/04\/label_function.png 836w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/p>\n<p>The graph domain adaptation problem we study is illustrated below. Given that the source label function has slow and fast variations along the indicated directions, the idea in <a href=\"https:\/\/arxiv.org\/abs\/1803.05288\">our work<\/a> is to transfer this label spectrum information to the target graph in order to estimate the target label function more accurately.<\/p>\n<h2><img loading=\"lazy\" decoding=\"async\" class=\" wp-image-66 aligncenter\" src=\"https:\/\/blog.metu.edu.tr\/velif\/files\/2020\/07\/illus_gda-300x73.png\" alt=\"\" width=\"497\" height=\"121\" srcset=\"https:\/\/blog.metu.edu.tr\/velif\/files\/2020\/07\/illus_gda-300x73.png 300w, https:\/\/blog.metu.edu.tr\/velif\/files\/2020\/07\/illus_gda-1024x249.png 1024w, https:\/\/blog.metu.edu.tr\/velif\/files\/2020\/07\/illus_gda-768x187.png 768w, https:\/\/blog.metu.edu.tr\/velif\/files\/2020\/07\/illus_gda-1536x374.png 1536w, https:\/\/blog.metu.edu.tr\/velif\/files\/2020\/07\/illus_gda.png 1746w\" sizes=\"auto, (max-width: 497px) 100vw, 497px\" \/><\/h2>\n<p>The transfer of information between two or more graphs can also be done in different ways, e.g., using localized bases for representing graph signals. 
In <a href=\"https:\/\/arxiv.org\/abs\/1911.02883\">this study<\/a>, we represent label functions in terms of <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S1063520310000552\">Spectral Graph Wavelets<\/a> (proposed by Hammond et al.) and transfer the wavelet coefficients of the label function between the two graphs. This is illustrated in the figure below.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-60 aligncenter\" src=\"https:\/\/blog.metu.edu.tr\/velif\/files\/2020\/07\/illus_da_alg-226x300.png\" alt=\"\" width=\"479\" height=\"636\" srcset=\"https:\/\/blog.metu.edu.tr\/velif\/files\/2020\/07\/illus_da_alg-226x300.png 226w, https:\/\/blog.metu.edu.tr\/velif\/files\/2020\/07\/illus_da_alg-772x1024.png 772w, https:\/\/blog.metu.edu.tr\/velif\/files\/2020\/07\/illus_da_alg-768x1019.png 768w, https:\/\/blog.metu.edu.tr\/velif\/files\/2020\/07\/illus_da_alg-1158x1536.png 1158w, https:\/\/blog.metu.edu.tr\/velif\/files\/2020\/07\/illus_da_alg.png 1425w\" sizes=\"auto, (max-width: 479px) 100vw, 479px\" \/><\/p>\n<div class=\"page\" title=\"Page 2\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>In order to get the most out of graph domain adaptation methods, it is important to understand their performance limits theoretically. The classification performance significantly depends on the structures of the source and the target graphs and the similarity between them. 
In particular, in problems where the graphs are constructed from data samples coming from data manifolds, the properties of the graphs, such as the locations and the weights of the edges and the number of neighbors of graph nodes, largely influence the performance of learning.<\/p>\n<p>In <a href=\"https:\/\/arxiv.org\/pdf\/1812.06944.pdf\">this work<\/a>, we propose a theoretical characterization of the performance of classification on the target graph in a graph domain adaptation setting. Our theoretical analysis suggests that very sparse graphs with too few edges and very dense graphs with too many edges should be avoided, as should too small edge weights. The smoothness of the label function is shown to positively influence the performance of learning. We then use these theoretical findings to propose a graph domain adaptation algorithm that jointly estimates the class labels of the source and the target data and the topologies of the source and the target graphs.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<h2>Dimensionality Reduction<\/h2>\n<p>Traditional algorithms such as PCA and LDA learn a linear projection that reduces the dimensionality of data while capturing its main and discriminative properties. During the last two decades, many studies, collectively known as manifold learning methods, have aimed to take into account the nonlinear but low-dimensional intrinsic structure of data in order to learn efficient models and classifiers. 
Supervised manifold learning methods learn a mapping for better separability of different classes, as illustrated below.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-25 aligncenter\" src=\"https:\/\/blog.metu.edu.tr\/velif\/files\/2017\/04\/supervised_ml-300x117.png\" alt=\"\" width=\"374\" height=\"146\" srcset=\"https:\/\/blog.metu.edu.tr\/velif\/files\/2017\/04\/supervised_ml-300x117.png 300w, https:\/\/blog.metu.edu.tr\/velif\/files\/2017\/04\/supervised_ml.png 566w\" sizes=\"auto, (max-width: 374px) 100vw, 374px\" \/><\/p>\n<p>Supervised manifold learning methods often focus on learning an embedding for the training data that achieves favourable separation between different classes. However, the extension of these embeddings to initially unavailable samples and their generalization performance on new data samples have largely been overlooked. We address these problems by proposing <a href=\"https:\/\/arxiv.org\/abs\/1507.05880\">generalization bounds for supervised manifold learning algorithms<\/a> and <a href=\"https:\/\/arxiv.org\/abs\/1502.02410\">semi-supervised out-of-sample extensions<\/a>.<\/p>\n<p>Our generalization bounds show that for successfully generalizing the learnt embedding to new samples, the Lipschitz constant of the interpolation function <em>f<\/em> should be made sufficiently small, taking into account the separation between different classes and the preservation of the local geometry (illustrated in the figure below). These theoretical results can then be used to improve the performance of supervised manifold learning algorithms. 
Motivated by these theoretical findings, in <a href=\"https:\/\/arxiv.org\/abs\/1710.07120\">this study<\/a> we propose a supervised manifold learning method that computes a nonlinear embedding while constructing a smooth and regular interpolation function that extends the embedding to the whole data space in order to achieve satisfactory generalization. The embedding and the interpolator are learnt jointly such that the Lipschitz regularity of the interpolator is imposed while ensuring the separation between different classes.<\/p>\n<h2><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-63 aligncenter\" src=\"https:\/\/blog.metu.edu.tr\/velif\/files\/2020\/07\/illus_embedding-300x228.png\" alt=\"\" width=\"300\" height=\"228\" srcset=\"https:\/\/blog.metu.edu.tr\/velif\/files\/2020\/07\/illus_embedding-300x228.png 300w, https:\/\/blog.metu.edu.tr\/velif\/files\/2020\/07\/illus_embedding-1024x778.png 1024w, https:\/\/blog.metu.edu.tr\/velif\/files\/2020\/07\/illus_embedding-768x583.png 768w, https:\/\/blog.metu.edu.tr\/velif\/files\/2020\/07\/illus_embedding.png 1068w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/h2>\n<h2>Multi-modal Learning<\/h2>\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>With the increasing accessibility of data acquisition and storage technologies, the need to successfully analyze and interpret multi-modal data collections has become more important. Many applications involve the acquisition or analysis of data collections available in multiple modalities. 
In <a href=\"https:\/\/arxiv.org\/pdf\/2006.02330.pdf\">this work<\/a>, we have studied the problem of learning supervised nonlinear representations for multi-modal classification and cross-modal retrieval applications.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p>We have first theoretically analyzed the learning of multi-modal nonlinear embeddings in a supervised setting. The studied multi-modal embedding setting is illustrated below. In this example, modalities <em>u<\/em> (image) and <em>v<\/em> (text) are mapped to a common domain via the interpolators <em>f(u)<\/em> and <em>f(v)<\/em>. The parameters \u03b7, R\u03b4, and \u03b3 respectively measure the alignment between different modalities, the within-class compactness, and the separation between different classes. (Images: wikipedia.org)<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-65 aligncenter\" src=\"https:\/\/blog.metu.edu.tr\/velif\/files\/2020\/07\/illus_model-300x178.png\" alt=\"\" width=\"515\" height=\"306\" srcset=\"https:\/\/blog.metu.edu.tr\/velif\/files\/2020\/07\/illus_model-300x178.png 300w, https:\/\/blog.metu.edu.tr\/velif\/files\/2020\/07\/illus_model-1024x608.png 1024w, https:\/\/blog.metu.edu.tr\/velif\/files\/2020\/07\/illus_model-768x456.png 768w, https:\/\/blog.metu.edu.tr\/velif\/files\/2020\/07\/illus_model-1536x912.png 1536w, https:\/\/blog.metu.edu.tr\/velif\/files\/2020\/07\/illus_model-2048x1216.png 2048w\" sizes=\"auto, (max-width: 515px) 100vw, 515px\" \/><\/p>\n<p>Our performance bounds indicate that for successful generalization in multi-modal classification and retrieval problems, the regularity of the interpolation functions extending the embedding to the whole data space is as important as the between-class separation and cross-modal alignment criteria. 
<span style=\"font-size: inherit\">We have then used these results to propose a multi-modal nonlinear representation learning algorithm, where the embeddings of the training samples are optimized jointly with the Lipschitz regularity of the interpolators.<\/span><\/p>\n<\/div>\n<\/div>\n<\/div>\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Estimation of Time-Varying Graph Signals from Incomplete Observations Many applications nowadays involve the acquisition of data over certain time intervals on platforms such as social networks, communication networks and irregular sensor arrays. Such data types can be modeled as time-varying graph signals. In many practical applications, it is not possible to observe graph signals as [&hellip;]<\/p>\n","protected":false},"author":4874,"featured_media":0,"parent":0,"menu_order":3,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":"","_links_to":"","_links_to_target":""},"class_list":["post-4","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/blog.metu.edu.tr\/velif\/wp-json\/wp\/v2\/pages\/4","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.metu.edu.tr\/velif\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/blog.metu.edu.tr\/velif\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/blog.metu.edu.tr\/velif\/wp-json\/wp\/v2\/users\/4874"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.metu.edu.tr\/velif\/wp-json\/wp\/v2\/comments?post=4"}],"version-history":[{"count":0,"href":"https:\/\/blog.metu.edu.tr\/velif\/wp-json\/wp\/v2\/pages\/4\/revisions"}],"wp:attachment":[{"href":"https:\/\/blog.metu.edu.tr\/velif\/wp-json\/wp\/v2\/media?parent=4"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}