🍩 Database of Original & Non-Theoretical Uses of Topology

(found 5 matches in 0.001644s)
  1. Theory and Algorithms for Constructing Discrete Morse Complexes From Grayscale Digital Images (2011)

    V. Robins, P. J. Wood, A. P. Sheppard
    Abstract We present an algorithm for determining the Morse complex of a two- or three-dimensional grayscale digital image. Each cell in the Morse complex corresponds to a topological change in the level sets (i.e., a critical point) of the grayscale image. Since more than one critical point may be associated with a single image voxel, we model digital images by cubical complexes. A new homotopic algorithm is used to construct a discrete Morse function on the cubical complex that agrees with the digital image and has exactly the number and type of critical cells necessary to characterize the topological changes in the level sets. We make use of discrete Morse theory and simple homotopy theory to prove correctness of this algorithm. The resulting Morse complex is considerably simpler than the cubical complex originally used to represent the image and may be used to compute persistent homology.
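The level-set critical points the abstract refers to can be illustrated in miniature. The sketch below is not the paper's algorithm (which works on a full cubical complex and handles plateaus and multi-critical voxels); it is a simplified 2D classification, assuming distinct values in the neighborhood, that inspects the 8-neighbour ring of an interior pixel and counts cyclic runs of strictly lower neighbours:

```python
def classify_pixel(img, i, j):
    """Classify an interior pixel of a 2D grayscale image (list of lists)
    as a discrete critical point by inspecting its 8-neighbour ring.
    Returns 'minimum', 'maximum', 'saddle', or 'regular'.
    Simplification: assumes values in the neighbourhood are distinct."""
    v = img[i][j]
    # 8 neighbours in cyclic order around (i, j)
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    lower = [img[i + di][j + dj] < v for di, dj in ring]
    if not any(lower):
        return 'minimum'       # no lower neighbour: component is born here
    if all(lower):
        return 'maximum'       # every neighbour is lower
    # count maximal cyclic runs of lower neighbours; lower[-1] wraps around
    runs = sum(1 for k in range(8) if lower[k] and not lower[k - 1])
    return 'regular' if runs == 1 else 'saddle'
```

A pixel where the lower neighbours split into two or more arcs is where two sublevel-set components meet, i.e. a saddle; this is the same topological event the Morse complex records at a critical cell.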
  2. Histopathological Cancer Detection With Topological Signatures (2023)

    Ankur Yadav, Faisal Ahmed, Ovidiu Daescu, Reyhan Gedik, Baris Coskunuzer
    Abstract We present a transformative approach to histopathological cancer detection and grading by introducing a very powerful feature extraction method based on the latest topological data analysis tools. By analyzing the evolution of topological patterns in different color channels, we discovered that every tumor class leaves its own topological footprint in histopathological images, allowing us to extract feature vectors that can be used to reliably identify tumor classes. Our topological signatures, even when combined with traditional machine learning methods, provide very fast and highly accurate results in various settings. While most DL models work well for one type of cancer, our model easily adapts to different scenarios, and consistently gives highly competitive results with the state-of-the-art models on benchmark datasets across multiple cancer types including bone, colon, breast, cervical (cytopathology), and prostate cancer. Unlike most DL models, our proposed Topo-ML model does not need any data augmentation or pre-processing steps and works perfectly on small datasets. The model is computationally very efficient, with end-to-end processing taking only a few hours for datasets consisting of thousands of images.
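The abstract does not specify how Topo-ML turns topological patterns into feature vectors; one standard vectorization of this kind is a Betti curve, which counts how many topological features are alive at each filtration threshold. The sketch below is an illustration of that general technique, not the paper's method (the function name and interface are ours):

```python
def betti_curve(diagram, thresholds):
    """Vectorise a persistence diagram as a Betti curve.

    diagram    -- list of (birth, death) pairs; use float('inf') for
                  features that never die
    thresholds -- filtration values at which to sample the curve
    Returns one integer per threshold: the number of intervals alive
    there, i.e. pairs with birth <= t < death."""
    return [sum(1 for b, d in diagram if b <= t < d) for t in thresholds]
```

Sampling one such curve per color channel and concatenating the results would give a fixed-length feature vector usable with traditional classifiers, which is the general pattern the abstract describes.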
  3. Feature Detection and Hypothesis Testing for Extremely Noisy Nanoparticle Images Using Topological Data Analysis (2023)

    Andrew M. Thomas, Peter A. Crozier, Yuchen Xu, David S. Matteson
    Abstract We propose a flexible algorithm for feature detection and hypothesis testing in images with ultra-low signal-to-noise ratio using cubical persistent homology. Our main application is in the identification of atomic columns and other features in Transmission Electron Microscopy (TEM). Cubical persistent homology is used to identify local minima and their size in subregions in the frames of nanoparticle videos, which are hypothesized to correspond to relevant atomic features. We compare the performance of our algorithm to other methods employed for the detection of columns and their intensity. Additionally, Monte Carlo goodness-of-fit testing using real-valued summaries of persistence diagrams derived from smoothed images (generated from pixels residing in the vacuum region of an image) is developed and employed to identify whether or not the proposed atomic features generated by our algorithm are due to noise. Using these summaries derived from the generated persistence diagrams, one can produce univariate time series for the nanoparticle videos, thus providing a means for assessing fluxional behavior. A guarantee on the false discovery rate for multiple Monte Carlo testing of identical hypotheses is also established.
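The core step, identifying local minima and their size via cubical persistent homology, can be sketched for homology dimension 0 as a union-find sweep over pixels in increasing grayscale order. This is a simplified stand-in for a full cubical persistence computation (4-connectivity, elder rule, zero-persistence pairs discarded), not the paper's implementation:

```python
import math

def sublevel_persistence_h0(img):
    """0-dimensional sublevel-set persistence of a 2D grayscale image
    (4-connectivity). Each local minimum births a connected component of
    the sublevel set; when two components merge, the one born at the
    higher value dies (elder rule). Returns sorted (birth, death) pairs;
    the global minimum's component never dies (death = inf)."""
    rows, cols = len(img), len(img[0])
    order = sorted((img[i][j], i, j) for i in range(rows) for j in range(cols))
    parent, birth = {}, {}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]  # path halving
            p = parent[p]
        return p

    pairs = []
    for value, i, j in order:
        p = (i, j)
        parent[p] = p
        birth[p] = value
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            q = (i + di, j + dj)
            if q in parent:                 # neighbour already in sublevel set
                rp, rq = find(p), find(q)
                if rp != rq:
                    if birth[rp] > birth[rq]:
                        rp, rq = rq, rp     # elder rule: younger root rq dies
                    if birth[rq] < value:   # skip zero-persistence pairs
                        pairs.append((birth[rq], value))
                    parent[rq] = rp
    pairs.append((order[0][0], math.inf))   # essential class
    return sorted(pairs)
```

The "size" of a local minimum in the abstract's sense corresponds to the persistence death − birth of its pair: deep, well-separated minima (candidate atomic columns) persist long, while noise-induced minima die quickly.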

  4. TILT: Topological Interface Recovery in Limited-Angle Tomography (2024)

    Elli Karvonen, Matti Lassas, Pekka Pankka, Samuli Siltanen
    Abstract A wavelet-based sparsity-promoting reconstruction method is studied in the context of tomography with severely limited projection data. Such imaging problems are ill-posed inverse problems, i.e., very sensitive to measurement and modeling errors. The reconstruction method is based on minimizing a sum of a data discrepancy term based on an $\ell^2$-norm and another term containing an $\ell^1$-norm of a wavelet coefficient vector. Depending on the viewpoint, the method can be considered (i) as finding the Bayesian maximum a posteriori (MAP) estimate using a Besov-space $B^1_{11}(\mathbb{T}^2)$ prior, or (ii) as deterministic regularization with a Besov-norm penalty. The minimization is performed using a tailored primal-dual path following interior-point method, which is applicable to problems larger in scale than commercially available general-purpose optimization package algorithms. The choice of “regularization parameter” is done by a novel technique called the S-curve method, which can be used to incorporate a priori information on the sparsity of the unknown target into the reconstruction process. Numerical results are presented, focusing on uniformly sampled sparse-angle data. Both simulated and measured data are considered, and noise-robust and edge-preserving multiresolution reconstructions are achieved. In sparse-angle cases with simulated data the proposed method offers a significant improvement in reconstruction quality (measured in relative square norm error) over filtered back-projection (FBP) and Tikhonov regularization.
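The objective described, an ℓ2 data-discrepancy term plus an ℓ1 penalty on a coefficient vector, can be illustrated with a generic iterative shrinkage-thresholding (ISTA) loop. The paper itself solves the problem with a tailored primal-dual interior-point method on wavelet coefficients; the sketch below is only a minimal stand-in on a small dense problem, with A standing in for the forward operator composed with the wavelet synthesis:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1: component-wise soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Minimise 0.5 * ||A x - y||_2^2 + lam * ||x||_1 by iterative
    shrinkage-thresholding (ISTA), step size 1/L with L = ||A||_2^2."""
    L = np.linalg.norm(A, 2) ** 2          # squared spectral norm of A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the l2 data term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

With A equal to the identity this reduces to one soft-thresholding of y, which makes the shrinkage behaviour of the ℓ1 penalty easy to verify; the S-curve method mentioned in the abstract would then be one way to pick lam from a priori sparsity information.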