Abstract
When operating in the proximity environment of a small body, the capability to navigate around it is of paramount importance for any onboard autonomous decision-making process. Onboard optical navigation is often performed by coupling image-processing algorithms with filtering techniques to generate position and velocity estimates, providing compelling navigation performance with cost-effective hardware. The same tasks could be addressed with data-driven methods, at the cost of requiring a sufficiently large training dataset. To investigate the extent to which such methods can substitute for traditional ones, in this paper we develop a possible onboard methodology based on segmentation masks, convolutional extreme learning machine architectures, and recurrent neural networks, which respectively generate simpler image inputs, map single-frame data into position estimates, and process multi-frame position sequences to produce both position and velocity estimates. Taking the primary of the Didymos binary system as a case study, and considering the possibility of complementing optical observations with LiDAR data, we show that recurrent neural networks bring only limited improvement in position reconstruction for the case considered, while they are beneficial for velocity estimation, especially when LiDAR data complement the optical observations.
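The single-frame step described above relies on an extreme learning machine: a hidden layer with random, fixed weights followed by a closed-form least-squares readout. A minimal numpy sketch of that idea is shown below; the features, dimensions, and linear ground-truth model are synthetic placeholders, not the paper's segmentation-mask dataset or its convolutional architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for features extracted from segmentation masks
# (e.g. centroid, apparent area) and the associated relative positions.
n_train, n_feat, n_hidden = 200, 8, 64
X = rng.normal(size=(n_train, n_feat))
true_map = rng.normal(size=(n_feat, 3))
Y = X @ true_map                 # toy ground-truth 3-D positions

# ELM: random, untrained hidden layer + closed-form output weights.
W = rng.normal(size=(n_feat, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)           # hidden-layer activations
beta = np.linalg.pinv(H) @ Y     # least-squares readout, no iterative training

# Inference on a new frame's features.
x_new = rng.normal(size=(1, n_feat))
pos_est = np.tanh(x_new @ W + b) @ beta
```

Because only `beta` is fitted, and in closed form, training is fast and deterministic, which is the usual appeal of ELMs for resource-constrained onboard use.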
Original language | English (US) |
---|---|
Pages (from-to) | 1-14 |
Number of pages | 14 |
Journal | IEEE Transactions on Aerospace and Electronic Systems |
State | Accepted/In press - 2023 |
Keywords
- Computer architecture
- Feature extraction
- IP networks
- Image Processing
- Image segmentation
- Navigation
- Recurrent neural networks
- Segmentation
- Small-Bodies
- Task analysis
ASJC Scopus subject areas
- Aerospace Engineering
- Electrical and Electronic Engineering