Obstacle Avoidance for UAS in Continuous Action Space Using Deep Reinforcement Learning

Jueming Hu, Xuxi Yang, Weichang Wang, Peng Wei, Yongming Liu

Research output: Contribution to journal › Article › peer-review

10 Scopus citations


Obstacle avoidance for small unmanned aircraft is vital to the safety of future urban air mobility (UAM) and Unmanned Aircraft System (UAS) Traffic Management (UTM). A variety of techniques exist for real-time robust drone guidance, but many of them operate in discretized airspace with discretized control, which requires an additional path-smoothing step to produce flexible commands for UAS. To deliver safe and computationally efficient guidance for UAS operations, we explore the use of a deep reinforcement learning algorithm based on Proximal Policy Optimization (PPO) to guide autonomous UAS to their destinations while avoiding obstacles through continuous control. The proposed state representation and reward function map the continuous state space to continuous control of both heading angle and speed. To verify the effectiveness of the proposed learning framework, we conducted numerical experiments with static and moving obstacles. Uncertainties associated with the environment and safe operation bounds are investigated in detail. Results show that the proposed model provides accurate and robust guidance and resolves conflicts with a success rate of over 99%.
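As a rough illustration of the continuous-action interface the abstract describes, the sketch below shows a stochastic policy mapping a UAS state vector to a sampled heading-angle change and speed command. The state layout, network sizes, action bounds, and Gaussian-policy parameterization are all illustrative assumptions, not details taken from the paper; in the actual framework the weights would be trained with PPO's clipped surrogate objective.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 8                      # assumed: own position/velocity + relative obstacle info
HIDDEN = 32
MAX_TURN = np.radians(15.0)        # assumed per-step heading-change bound (rad)
SPEED_MIN, SPEED_MAX = 5.0, 20.0   # assumed speed command bounds (m/s)

# Randomly initialized two-layer policy "network" standing in for a trained PPO actor.
W1 = rng.normal(scale=0.1, size=(STATE_DIM, HIDDEN))
W2 = rng.normal(scale=0.1, size=(HIDDEN, 2))  # outputs: [turn mean, speed mean]
log_std = np.array([-1.0, -1.0])              # exploration noise (learned in training)

def policy(state):
    """Map a state vector to a sampled (heading_change, speed) action."""
    h = np.tanh(state @ W1)
    mean = np.tanh(h @ W2)                               # squash means into [-1, 1]
    action = mean + np.exp(log_std) * rng.normal(size=2)  # Gaussian sample
    # Rescale normalized actions into physical command ranges.
    heading_change = np.clip(action[0], -1.0, 1.0) * MAX_TURN
    speed = SPEED_MIN + (np.clip(action[1], -1.0, 1.0) + 1.0) / 2.0 * (SPEED_MAX - SPEED_MIN)
    return heading_change, speed

state = rng.normal(size=STATE_DIM)
turn, speed = policy(state)
```

Because both outputs are continuous, the commands can be fed directly to the vehicle at each step, with no separate path-smoothing stage over a discretized grid.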

Original language: English (US)
Pages (from-to): 1
Number of pages: 1
Journal: IEEE Access
State: Published - 2022


Keywords

  • Air traffic control
  • Aircraft
  • Collision avoidance
  • Games
  • Markov processes
  • Reinforcement learning
  • UAS obstacle avoidance
  • Uncertainty
  • Continuous control
  • Deep reinforcement learning

ASJC Scopus subject areas

  • General Engineering
  • General Materials Science
  • General Computer Science


