Topological Autoencoders (2020)
Michael Moor, Max Horn, Bastian Rieck, Karsten Borgwardt
Abstract

We propose a novel approach for preserving topological structures of the input space in latent representations of autoencoders. Using persistent homology, a technique from topological data analysis, we calculate topological signatures of both the input and latent space to derive a topological loss term. Under weak theoretical assumptions, we construct this loss in a differentiable manner, such that the encoding learns to retain multi-scale connectivity information. We show that our approach is theoretically well-founded and that it exhibits favourable latent representations on a synthetic manifold as well as on real-world image data sets, while preserving low reconstruction errors.
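
To make the abstract's mechanism concrete, below is a minimal sketch of a persistent-homology-based topological loss added to an autoencoder's reconstruction loss. It assumes a PyTorch/SciPy setup and exploits the fact that the 0-dimensional persistence pairing of a Vietoris-Rips filtration on a mini-batch coincides with the minimum spanning tree edges of its pairwise distance matrix; gradients flow through the distance values at the paired edges, while the pairing itself is a discrete selection. Names such as topological_loss, the stand-in encoder/decoder, and the loss weight are illustrative assumptions, not the authors' reference implementation.

    # Sketch of a 0-dimensional topological loss for an autoencoder.
    # Assumptions: PyTorch and SciPy are available; names are illustrative.
    import numpy as np
    import torch
    from scipy.sparse.csgraph import minimum_spanning_tree


    def pairwise_distances(x: torch.Tensor) -> torch.Tensor:
        """Euclidean distance matrix for a batch of points of shape (n, d)."""
        return torch.cdist(x, x, p=2)


    def persistence_pairs(dist: torch.Tensor) -> torch.Tensor:
        """Edge indices (i, j) of the 0-dim persistence pairing.

        For a Vietoris-Rips filtration these are the minimum spanning tree
        edges of the distance matrix. The selection is discrete, so it is
        computed on a detached copy; gradients flow through the distance
        values indexed by these pairs, not through the selection itself.
        """
        mst = minimum_spanning_tree(dist.detach().cpu().numpy())
        rows, cols = mst.nonzero()
        return torch.from_numpy(np.stack([rows, cols], axis=1).astype(np.int64))


    def topological_loss(x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        """Align the distances selected by the input- and latent-space pairings."""
        dx, dz = pairwise_distances(x), pairwise_distances(z)
        px, pz = persistence_pairs(dx), persistence_pairs(dz)
        loss_x = ((dx[px[:, 0], px[:, 1]] - dz[px[:, 0], px[:, 1]]) ** 2).mean()
        loss_z = ((dz[pz[:, 0], pz[:, 1]] - dx[pz[:, 0], pz[:, 1]]) ** 2).mean()
        return loss_x + loss_z


    if __name__ == "__main__":
        # Toy usage: add the topological term to a reconstruction loss.
        x = torch.randn(64, 784)              # mini-batch of inputs
        encoder = torch.nn.Linear(784, 2)     # stand-in encoder
        decoder = torch.nn.Linear(2, 784)     # stand-in decoder
        z = encoder(x)
        recon = decoder(z)
        lam = 1.0                             # illustrative weight on the term
        loss = torch.nn.functional.mse_loss(recon, x) + lam * topological_loss(x, z)
        loss.backward()

In practice the topological term is computed per mini-batch alongside the usual reconstruction objective, so any encoder/decoder architecture can be plugged into the sketch above.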