TY - GEN
T1 - Adaptive pinpoint and fuel efficient Mars landing using Reinforcement Learning
AU - Gaudet, Brian
AU - Furfaro, Roberto
PY - 2012
Y1 - 2012
AB - Future unconstrained and science-driven missions to Mars will require advanced guidance algorithms that can adapt to more demanding mission requirements, e.g., landing at selected locales with pinpoint accuracy while autonomously flying fuel-efficient trajectories. In this paper, we present a novel guidance algorithm designed by applying the principles of Reinforcement Learning (RL) theory. The goal is to devise an adaptive guidance algorithm that enables robust, fuel-efficient, and accurate landing without the need for off-line trajectory generation. Results from a Monte Carlo simulation campaign show that the algorithm is capable of autonomously flying trajectories that are close to the optimal minimum-fuel solutions, with an accuracy that surpasses that of conventional Apollo-like guidance algorithms. The proposed RL-based guidance algorithm exhibits a high degree of flexibility and can easily accommodate autonomous retargeting while maintaining accuracy and fuel efficiency. Although reinforcement learning and other similar machine learning techniques have previously been applied to aerospace guidance and control problems (e.g., autonomous helicopter control), to the best of our knowledge this is the first application of reinforcement learning to the problem of autonomous planetary landing.
UR - http://www.scopus.com/inward/record.url?scp=84879398490&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84879398490&partnerID=8YFLogxK
M3 - Conference contribution
SN - 9780877035817
T3 - Advances in the Astronautical Sciences
SP - 1309
EP - 1328
BT - Spaceflight Mechanics 2012 - Advances in the Astronautical Sciences
T2 - 22nd AAS/AIAA Space Flight Mechanics Meeting
Y2 - 2 February 2012
ER -