Vision-Based Spacecraft Pose Estimation via a Deep Convolutional Neural Network for Noncooperative Docking Operations

The capture of a target spacecraft by a chaser is an on-orbit docking operation that requires an accurate, reliable, and robust object recognition algorithm. Vision-based guidance of spacecraft relative motion during close-proximity maneuvers has conventionally been applied using dynamic modeling as part of spacecraft on-orbit servicing systems. This research constructs a vision-based pose estimation model that performs image processing via a deep convolutional neural network. The pose estimation model was built by repurposing a modified pretrained GoogLeNet model with an available Unreal Engine 4 rendered dataset of the Soyuz spacecraft. In the implementation, the convolutional neural network learns from the data samples to create correlations between the images and the spacecraft's six degrees-of-freedom parameters. The experiment compared an exponential-based loss function and a weighted Euclidean-based loss function. Using the weighted Euclidean-based loss function, the implemented pose estimation model achieved moderately high performance, with a position accuracy of 92.53 percent and an error of 1.2 m. The attitude prediction accuracy reaches 87.93 percent, and the errors in the three Euler angles do not exceed 7.6 degrees. This research can contribute to spacecraft detection and tracking problems. Although the finished vision-based model is specific to the environment of the synthetic dataset, the model could be trained further to address actual docking operations in the future.

Bibliographic Details
Main Authors: Thaweerath Phisannupawong, Patcharin Kamsing, Peerapong Torteeka, Sittiporn Channumsin, Utane Sawangwit, Warunyu Hematulin, Tanatthep Jarawan, Thanaporn Somjit, Soemsak Yooyen, Daniel Delahaye, Pisit Boonsrimuang
Format: Article
Language: English
Published: MDPI AG, 2020-08-01
Series: Aerospace, Vol. 7, Issue 9, Article 126
ISSN: 2226-4310
DOI: 10.3390/aerospace7090126
Subjects: spacecraft docking operation; on-orbit services; pose estimation; deep convolutional neural network
Online Access: https://www.mdpi.com/2226-4310/7/9/126
Collection: DOAJ
Author Affiliations:
Thaweerath Phisannupawong, Patcharin Kamsing, Warunyu Hematulin, Tanatthep Jarawan, Thanaporn Somjit, Soemsak Yooyen: Air-Space Control, Optimization, and Management Laboratory, Department of Aeronautical Engineering, International Academy of Aviation Industry, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
Peerapong Torteeka, Utane Sawangwit: Research Group, National Astronomical Research Institute of Thailand, Chiang Mai 50180, Thailand
Sittiporn Channumsin: Astrodynamics Research Laboratory, Geo-Informatics and Space Technology Development Agency (GISTDA), Chonburi 20230, Thailand
Daniel Delahaye: Ecole Nationale de l’Aviation Civile, 31400 Toulouse, France
Pisit Boonsrimuang: Faculty of Engineering, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
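The record describes the approach only at a high level; it does not include the authors' code or the exact form of either loss function. The following is a minimal, illustrative sketch (not the authors' implementation) of how a pretrained GoogLeNet can be repurposed for six degrees-of-freedom pose regression in PyTorch/torchvision. The 3 + 4 (translation + quaternion) output split, the head name `fc_pose`, and the weighting factor `beta` are assumptions borrowed from PoseNet-style regressors; only the weighted Euclidean-based loss is sketched, since the exponential-based variant's form is not given in this record.

```python
# Illustrative sketch only -- not the authors' code. Assumes PyTorch and
# torchvision >= 0.13 (for the GoogLeNet_Weights API).
import torch
import torch.nn as nn
from torchvision import models

class PoseRegressor(nn.Module):
    """Pretrained GoogLeNet backbone repurposed to regress 6-DoF pose."""

    def __init__(self):
        super().__init__()
        # Load ImageNet-pretrained GoogLeNet and drop its 1000-class head
        # so the backbone emits pooled 1024-d features instead of logits.
        backbone = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT)
        backbone.fc = nn.Identity()
        self.backbone = backbone
        # Hypothetical regression head: 3 translation + 4 quaternion values.
        self.fc_pose = nn.Linear(1024, 7)

    def forward(self, x):
        features = self.backbone(x)              # (N, 1024)
        out = self.fc_pose(features)             # (N, 7)
        t, q = out[:, :3], out[:, 3:]
        q = q / q.norm(dim=1, keepdim=True)      # keep quaternion on the unit sphere
        return t, q

def weighted_euclidean_loss(t_pred, q_pred, t_true, q_true, beta=500.0):
    # One plausible "weighted Euclidean-based" loss: position error plus a
    # scaled orientation error. The value of `beta` is an assumption; it
    # balances meters against the much smaller quaternion distances.
    pos_err = (t_pred - t_true).norm(dim=1).mean()
    ori_err = (q_pred - q_true / q_true.norm(dim=1, keepdim=True)).norm(dim=1).mean()
    return pos_err + beta * ori_err

if __name__ == "__main__":
    model = PoseRegressor()
    img = torch.randn(2, 3, 224, 224)  # dummy batch; real inputs would be rendered Soyuz images
    t_hat, q_hat = model(img)
    print(t_hat.shape, q_hat.shape)    # torch.Size([2, 3]) torch.Size([2, 4])
```

In this style of loss, `beta` trades translational error against orientation error, and its tuning is typically dataset-dependent; the paper's comparison of loss functions suggests this balance was a central design question.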