Abstract
Low-fidelity, engineering-level dynamic models are commonly employed when designing uncrewed aircraft flight controllers because they are fast and inexpensive to develop. However, under adverse conditions or during complex path-following missions, the uncertainties in low-fidelity models often result in suboptimal controller performance. Aircraft system identification techniques offer alternative methods for obtaining higher-fidelity dynamic models but can impose restrictive flight-test requirements and procedures, a challenge that is exacerbated when there is no pilot onboard. This work introduces data-driven machine learning (ML) to enhance the fidelity of aircraft dynamic models, overcoming the limitations of conventional system identification. A large dataset from twelve previous flights is used within an ML framework to create a long short-term memory (LSTM) model of the aircraft's lateral-directional dynamics. To quantify the impact of the LSTM dynamic model improvements on controller performance, a deep reinforcement learning (RL)-based flight controller is developed using a randomized dynamic domain constructed from the LSTM and physics-based models. The RL controller's performance is compared to that of other modern control techniques in four flight tests conducted in the presence of exogenous disturbances and noise, assessing its tracking capability and its ability to reject disturbances. The RL controller with the randomized dynamic domain outperforms an RL controller trained using only the engineering-level dynamic model, a linear quadratic regulator, and an L1 adaptive controller. Notably, it demonstrates up to a 72% improvement in lateral tracking when the aircraft follows challenging paths and under intentionally induced adverse onboard conditions.