TOWARDS MESH-BASED DEEP LEARNING FOR SEMANTIC SEGMENTATION IN PHOTOGRAMMETRY

This research is the first to apply MeshCNN – a deep learning model that is specifically designed for 3D triangular meshes – in the photogrammetry domain. We highlight the challenges that arise when applying a mesh-based deep learning model to a photogrammetric mesh, especially with respect to data set properties. We provide solutions for preparing a remotely sensed mesh for a machine learning task. The most notable pre-processing step proposed is a novel application of the Breadth-First Search algorithm for chunking a large mesh into computable pieces. Furthermore, this work extends MeshCNN such that photometric features based on the mesh texture are considered in addition to the geometric information. Experiments show that including color information improves the predictive performance of the model by a large margin. Moreover, experimental results indicate that segmentation performance could be advanced substantially with the introduction of a high-quality benchmark for semantic segmentation on meshes.
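The chunking step described above can be pictured as a breadth-first traversal of the face-adjacency graph that stops growing a chunk once a size budget is reached. The paper's own implementation is not reproduced here; the Python sketch below is a minimal illustration under that assumption, and all names (`chunk_mesh_bfs`, `max_faces`) are illustrative rather than taken from the article.

```python
from collections import deque, defaultdict

def build_face_adjacency(faces):
    """Map each face index to the indices of faces sharing an edge with it."""
    edge_to_faces = defaultdict(list)
    for fi, (a, b, c) in enumerate(faces):
        for u, v in ((a, b), (b, c), (c, a)):
            edge_to_faces[tuple(sorted((u, v)))].append(fi)
    adjacency = defaultdict(set)
    for shared in edge_to_faces.values():
        for fi in shared:
            adjacency[fi].update(f for f in shared if f != fi)
    return adjacency

def chunk_mesh_bfs(faces, max_faces=10000):
    """Partition face indices into edge-connected chunks of at most max_faces faces."""
    adjacency = build_face_adjacency(faces)
    unvisited = set(range(len(faces)))
    chunks = []
    while unvisited:
        seed = unvisited.pop()              # start a new chunk from any remaining face
        queue, chunk = deque([seed]), []
        while queue and len(chunk) < max_faces:
            face = queue.popleft()
            chunk.append(face)
            for neighbour in adjacency[face]:
                if neighbour in unvisited:
                    unvisited.discard(neighbour)
                    queue.append(neighbour)
        unvisited.update(queue)             # faces queued past the budget seed later chunks
        chunks.append(chunk)
    return chunks
```

Each chunk is an edge-connected patch of the original mesh and can be exported as an independent sample that fits the memory limits of a mesh-based network.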

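The photometric extension can be read as appending per-edge colour channels to MeshCNN's five geometric edge features (dihedral angle, inner angles, edge-length ratios). The NumPy sketch below assumes per-vertex RGB values already sampled from the texture and simply averages them per edge; the paper's actual texture sampling is not shown, and the function names are hypothetical.

```python
import numpy as np

def edge_color_features(vertex_colors, edges):
    """Per-edge RGB: mean of the two endpoint colours, values in [0, 1].

    vertex_colors: (n_vertices, 3) array; edges: (n_edges, 2) vertex-index pairs.
    """
    return vertex_colors[edges].mean(axis=1)               # -> (n_edges, 3)

def augment_edge_features(geometric_features, vertex_colors, edges):
    """Append photometric channels to MeshCNN-style geometric edge features.

    geometric_features: (n_edges, 5) geometric descriptors per edge.
    Returns an (n_edges, 8) feature matrix.
    """
    colour = edge_color_features(vertex_colors, edges)
    return np.concatenate([geometric_features, colour], axis=1)
```

A network consuming these features would then need its input convolution widened from five to eight channels to accept the extra colour information.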

Bibliographic Details
Main Authors: M. Knott, R. Groenendijk
Author Affiliations: University of Amsterdam, The Netherlands; Cloudflight Germany GmbH, Germany
Format: Article
Language: English
Published: Copernicus Publications, 2021-06-01
Series: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. V-2-2021, pp. 59-66
ISSN: 2194-9042, 2194-9050
DOI: 10.5194/isprs-annals-V-2-2021-59-2021
Online Access: https://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/V-2-2021/59/2021/isprs-annals-V-2-2021-59-2021.pdf