Self-supervised feature extraction for 3D axon segmentation


Bibliographic Details
Main Authors: Klinghoffer, T (Author), Morales, P (Author), Park, YG (Author), Evans, N (Author), Chung, K (Author), Brattain, LJ (Author)
Format: Article
Language: English
Published: IEEE, 2021-11-04T15:25:08Z.
Description
Summary: © 2020 IEEE. Existing learning-based methods to automatically trace axons in 3D brain imagery often rely on manually annotated segmentation labels. Labeling is a labor-intensive process and is not scalable to whole-brain analysis, which is needed for an improved understanding of brain function. We propose a self-supervised auxiliary task that exploits the tube-like structure of axons to build a feature extractor from unlabeled data. The proposed auxiliary task constrains a 3D convolutional neural network (CNN) to predict the order of permuted slices in an input 3D volume. By solving this task, the 3D CNN learns features, without ground-truth labels, that are useful for downstream segmentation with the 3D U-Net model. To the best of our knowledge, our model is the first to perform automated segmentation of axons imaged at subcellular resolution with the SHIELD technique. We demonstrate improved segmentation performance over the 3D U-Net model on both the SHIELD PVGPe dataset and the BigNeuron project's single-neuron Janelia dataset.
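As a rough illustration of the slice-permutation pretext task described in the summary, the sketch below shuffles axial slabs of a 3D volume and derives the classification label a network would be trained to predict. The function name, slab count, and NumPy-based formulation are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np
from itertools import permutations

def make_permutation_sample(volume, num_slices=4, rng=None):
    """Split a 3D volume into axial slabs, shuffle them, and return
    (shuffled_volume, label, perm), where `label` indexes the applied
    permutation among all num_slices! possibilities -- the target a
    3D CNN would be trained to classify in the self-supervised task.
    Illustrative sketch only; the paper's exact slicing may differ."""
    rng = np.random.default_rng() if rng is None else rng
    # Split along the first (axial) axis into roughly equal slabs.
    slabs = np.array_split(volume, num_slices, axis=0)
    # Draw a random ordering of the slabs.
    perm = rng.permutation(num_slices)
    shuffled = np.concatenate([slabs[i] for i in perm], axis=0)
    # Encode the permutation as a single class index (e.g. 0..23 for 4 slabs).
    label = list(permutations(range(num_slices))).index(tuple(perm))
    return shuffled, label, perm
```

A pretext classifier trained on such (shuffled volume, label) pairs never needs segmentation annotations; its learned features can then initialize a downstream segmentation model.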