🍩 Database of Original & Non-Theoretical Uses of Topology
(found 8 matches in 0.002887s)


Topological Graph Neural Networks (2021)
Max Horn, Edward De Brouwer, Michael Moor, Yves Moreau, Bastian Rieck, Karsten Borgwardt
Abstract
Graph neural networks (GNNs) are a powerful architecture for tackling graph learning tasks, yet have been shown to be oblivious to eminent substructures, such as cycles. We present TOGL, a novel layer that incorporates global topological information of a graph using persistent homology. TOGL can be easily integrated into any type of GNN and is strictly more expressive in terms of the Weisfeiler–Lehman test of isomorphism. Augmenting GNNs with our layer leads to beneficial predictive performance, both on synthetic data sets, which can be trivially classified by humans but not by ordinary GNNs, and on real-world data.
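A hypothetical, minimal illustration (not the TOGL layer itself) of the kind of global information the abstract refers to: the Betti numbers b0 (connected components) and b1 (independent cycles), which persistent homology tracks, can be computed for a graph with a union-find and can separate graphs that plain neighborhood aggregation treats alike.

```python
# Sketch only: b0 = number of connected components,
# b1 = |E| - |V| + b0 (number of independent cycles).
def betti_numbers(num_vertices, edges):
    """Return (b0, b1) for an undirected graph given as an edge list."""
    parent = list(range(num_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    components = num_vertices
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1

    b0 = components
    b1 = len(edges) - num_vertices + components  # independent cycles
    return b0, b1

# A 4-cycle versus a path: same degree statistics in most neighborhoods,
# different topology.
print(betti_numbers(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # (1, 1)
print(betti_numbers(5, [(0, 1), (1, 2), (2, 3), (3, 4)]))  # (1, 0)
```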
Reconstructing Linearly Embedded Graphs: A First Step to Stratified Space Learning (2021)
Yossi Bokor, Christopher Williams, Katharine Turner

Filtration Curves for Graph Representation (2021)
Leslie O'Bray, Bastian Rieck, Karsten Borgwardt
Abstract
The two predominant approaches to graph comparison in recent years are based on (i) enumerating matching subgraphs or (ii) comparing neighborhoods of nodes. In this work, we complement these two perspectives with a third way of representing graphs: using filtration curves from topological data analysis that capture both edge weight information and global graph structure. Filtration curves are highly efficient to compute and lead to expressive representations of graphs, which we demonstrate on graph classification benchmark datasets. Our work opens the door to a new form of graph representation in data mining. 
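A minimal sketch of the general idea, under the assumption that one simple descriptor is tracked: add edges in increasing weight order and record a graph statistic, here the number of connected components, at each weight, yielding a curve indexed by the filtration parameter. The actual descriptors used in the paper may differ.

```python
# Hypothetical filtration-curve sketch using connected components.
def component_filtration_curve(num_vertices, weighted_edges):
    """weighted_edges: list of (weight, u, v) tuples.
    Returns a list of (weight, #components) pairs as edges enter."""
    parent = list(range(num_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    components = num_vertices
    curve = []
    for w, u, v in sorted(weighted_edges):  # sweep thresholds upward
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1
        curve.append((w, components))
    return curve

edges = [(0.2, 0, 1), (0.5, 1, 2), (0.9, 0, 2), (1.3, 2, 3)]
print(component_filtration_curve(4, edges))
# [(0.2, 3), (0.5, 2), (0.9, 2), (1.3, 1)]
```

The single pass over sorted edges is what makes curves of this kind cheap to compute relative to subgraph enumeration.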
Graph Filtration Learning (2020)
Christoph Hofer, Florian Graf, Bastian Rieck, Marc Niethammer, Roland Kwitt
Abstract
We propose an approach to learning with graph-structured data in the problem domain of graph classification. In particular, we present a novel type of readout operation to aggregate node features into a graph-level representation. To this end, we leverage persistent homology computed via a real-valued, learnable filter function. We establish the theoretical foundation for differentiating through the persistent homology computation. Empirically, we show that this type of readout operation compares favorably to previous techniques, especially when the graph connectivity structure is informative for the learning problem.
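A sketch of the underlying computation, with fixed rather than learned filter values: 0-dimensional sublevel-set persistence of a vertex-filtered graph via union-find and the elder rule. This is the standard algorithm, not the paper's differentiable formulation.

```python
import math

def zero_dim_persistence(filter_values, edges):
    """0-dim. persistence of a graph filtered by vertex values; an edge
    enters at the max of its endpoints. Returns sorted (birth, death)
    pairs, with death = inf for classes that never die."""
    n = len(filter_values)
    order = sorted(range(n), key=lambda v: filter_values[v])
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    parent = {}
    birth = {}  # union-find root -> birth value of its component
    pairs = []

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for v in order:
        parent[v] = v
        birth[v] = filter_values[v]
        for u in adj[v]:
            if u not in parent:
                continue  # neighbor not yet in the sublevel set
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            if birth[ru] > birth[rv]:
                ru, rv = rv, ru
            # elder rule: the younger component dies now
            pairs.append((birth[rv], filter_values[v]))
            parent[rv] = ru
    roots = {find(v) for v in range(n)}
    pairs.extend((birth[r], math.inf) for r in roots)
    return sorted(pairs)

# Path 0-1-2 with values [0.0, 1.0, 0.5]: two components appear and merge.
print(zero_dim_persistence([0.0, 1.0, 0.5], [(0, 1), (1, 2)]))
# [(0.0, inf), (0.5, 1.0), (1.0, 1.0)]
```

In the paper's setting the filter values are produced by a learnable function of the node features, and gradients flow back through the resulting persistence pairs.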
Graph Classification via Heat Diffusion on Simplicial Complexes (2020)
Mehmet Emin Aktas, Esra Akbas
Abstract
In this paper, we study the graph classification problem in vertex-labeled graphs. Our main goal is to classify graphs by comparing their higher-order structures via heat diffusion on their simplices. We first represent vertex-labeled graphs as simplex-weighted supergraphs. We then define the diffusion Fréchet function over their simplices to encode the higher-order network topology and finally reach our goal by combining the function values with machine learning algorithms. Our experiments on real-world bioinformatics networks show that using the diffusion Fréchet function on simplices is promising for graph classification and more effective than the baseline methods. To the best of our knowledge, this is the first paper in the literature to use heat diffusion on higher-dimensional simplices in a graph mining problem. We believe that our method can be extended to other graph mining domains beyond the graph classification problem.
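A hypothetical, minimal illustration of heat diffusion on a graph; the paper works on higher-order simplices, whereas this sketch shows only the vertex case, using explicit Euler steps on the combinatorial Laplacian L = D - A.

```python
# Sketch only: discrete heat equation du/dt = -L u, stepped explicitly.
def heat_diffusion(adj, u0, steps=100, dt=0.01):
    """adj: adjacency lists; u0: initial heat per vertex."""
    u = list(u0)
    for _ in range(steps):
        new_u = []
        for v, neighbors in enumerate(adj):
            # (L u)_v = deg(v) * u_v - sum of neighbor values
            laplacian_v = len(neighbors) * u[v] - sum(u[w] for w in neighbors)
            new_u.append(u[v] - dt * laplacian_v)
        u = new_u
    return u

# Heat placed on one vertex of a triangle spreads toward the uniform
# state while the total heat is conserved.
adj = [[1, 2], [0, 2], [0, 1]]
u = heat_diffusion(adj, [1.0, 0.0, 0.0])
print([round(x, 3) for x in u])
```

Functions of such diffusion values over simplices (e.g., a Fréchet-type average) can then be fed to a standard classifier, which is the pipeline the abstract describes.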
A Persistent Weisfeiler–Lehman Procedure for Graph Classification (2019)
Bastian Rieck, Christian Bock, Karsten Borgwardt
Abstract
The Weisfeiler–Lehman graph kernel exhibits competitive performance in many graph classification tasks. However, its subtree features are not able to capture connected components and cycles, topological features known for characterising graphs. To extract such features, we leverage propagated node label information and transform unweighted graphs into metric ones. This permits us to augment the subtree features with topological information obtained using persistent homology, a concept from topological data analysis. Our method, which we formalise as a generalisation of Weisfeiler–Lehman subtree features, exhibits favourable classification accuracy and its improvements in predictive performance are mainly driven by including cycle information. 
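For reference, a minimal sketch of one standard Weisfeiler–Lehman refinement step (the base procedure the paper generalises, not its persistent extension): each node's new label compresses its old label together with the multiset of its neighbours' labels.

```python
# Sketch of one WL label-refinement round.
def wl_refine(labels, adj):
    """labels: integer label per node; adj: adjacency lists.
    Returns the refined integer labels after one round."""
    signatures = [
        (labels[v], tuple(sorted(labels[u] for u in adj[v])))
        for v in range(len(labels))
    ]
    # Compress each distinct signature to a fresh integer label.
    compress = {sig: i for i, sig in enumerate(sorted(set(signatures)))}
    return [compress[sig] for sig in signatures]

# Path 0-1-2: one round distinguishes the middle vertex from the endpoints,
# but no number of rounds on subtree features alone detects cycles.
adj = [[1], [0, 2], [1]]
print(wl_refine([0, 0, 0], adj))  # [0, 1, 0]
```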
Learning Representations of Persistence Barcodes (2019)
Christoph D. Hofer, Roland Kwitt, Marc Niethammer
Abstract
We consider the problem of supervised learning with summary representations of topological features in data. In particular, we focus on persistent homology, the prevalent tool used in topological data analysis. As the summary representations, referred to as barcodes or persistence diagrams, come in the unusual format of multisets, equipped with computationally expensive metrics, they cannot readily be processed with conventional learning techniques. While different approaches to address this problem have been proposed, either in the context of kernel-based learning or via carefully designed vectorization techniques, it remains an open problem how to leverage advances in representation learning via deep neural networks. Appropriately handling topological summaries as input to neural networks would address the disadvantage of previous strategies, which handle this type of data in a task-agnostic manner. In particular, we propose an approach that is designed to learn a task-specific representation of barcodes. In other words, we aim to learn a representation that adapts to the learning problem while, at the same time, preserving theoretical properties (such as stability). This is done by projecting barcodes into a finite-dimensional vector space using a collection of parametrized functionals, so-called structure elements, for which we provide a generic construction scheme. A theoretical analysis of this approach reveals sufficient conditions to preserve stability, and also shows that different choices of structure elements lead to great differences with respect to their suitability for numerical optimization. When implemented as a neural network input layer, our approach demonstrates compelling performance on various types of problems, including graph classification and eigenvalue prediction, the classification of 2D/3D object shapes, and recognizing activities from EEG signals.
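A hypothetical sketch of the projection idea, assuming one simple Gaussian-bump form of structure element (the paper's construction scheme is more general, and the parameters would be learned rather than fixed): each element maps a barcode to a scalar, and a collection of them yields a fixed-size vector a network can consume.

```python
import math

# Sketch only: one parametrised functional over a barcode, i.e. a
# multiset of (birth, death) points in the plane.
def structure_element(barcode, mu_b, mu_d, sigma):
    """Sum a Gaussian bump centred at (mu_b, mu_d) over all barcode points."""
    return sum(
        math.exp(-((b - mu_b) ** 2 + (d - mu_d) ** 2) / sigma ** 2)
        for b, d in barcode
    )

def vectorize(barcode, centres, sigma=0.5):
    """Project a barcode into R^len(centres) -- a fixed-size input vector."""
    return [structure_element(barcode, mb, md, sigma) for mb, md in centres]

barcode = [(0.0, 1.0), (0.2, 0.9), (0.1, 0.3)]
print(vectorize(barcode, [(0.0, 1.0), (0.5, 0.5)]))
```

Because each functional is smooth in its parameters, the centres and widths can sit inside a network's first layer and be trained end-to-end, which is what makes the representation task-specific.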