Publications

Conference Paper

Learning Optimal Aerodynamic Designs through Multi-Fidelity Reduced-Dimensional Neural Networks

Authors

X. Du, J. R. R. A. Martins, T. O’Leary-Roseberry, A. Chaudhuri, O. Ghattas, and K. E. Willcox

Venue

AIAA SciTech Forum, 2023

DOI

10.2514/6.2023-0334

Aerodynamic optimization plays an important role in automating the aircraft design process, seeking the minimum drag under desired flight requirements. Existing approaches commonly adopt derivative-free algorithms for low-dimensional design spaces, while gradient-based algorithms enabled by adjoint methods alleviate the computational expense of large-scale design problems. However, because both require repeated evaluations of simulation models, conventional optimization still relies on high-performance computing clusters. Consequently, advances in machine learning and deep learning have given rise to surrogate models that stand in for computationally intensive simulations. In this paper, we propose a novel optimal design concept: learning the mapping directly from design requirements to optimal designs through neural network surrogates. The trained surrogate then predicts optimal designs directly, with no need to run any optimization. Because the mapping is complex and the training data are costly (one training sample corresponds to one simulation-based optimization), we first exploit a low-rank representation of the output space, i.e., the optimal designs. We then develop a multi-fidelity version of the reduced-dimensional neural networks to reduce the computational cost even further. Results show that the reduced-dimensional neural networks achieved 95% predictive accuracy on the optimal designs using 120 high-fidelity training samples, whereas full-space networks required 140. When provided with very few data (e.g., 20 samples), the reduced-dimensional networks increased predictive accuracy by more than 5%. Furthermore, the multi-fidelity reduced-dimensional networks achieved 95% accuracy using 80 high-fidelity and 40 low-fidelity samples, computationally equivalent to 82 high-fidelity samples, a 31.7% reduction in computational cost.
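
The paper defines its own reduced-dimensional architecture; the Python sketch below only illustrates the general idea of predicting optimal designs through a low-rank output basis. The PCA basis, the MLPRegressor surrogate, and all array shapes are assumptions made for illustration, not the authors' implementation.

    # Illustrative sketch: map design requirements to optimal designs
    # via a low-rank representation of the output space.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Placeholder data standing in for (requirement, optimal-design) pairs,
    # where each row of Y would come from one simulation-based optimization.
    X = rng.uniform(size=(120, 3))    # flight requirements (hypothetical: Mach, lift, altitude)
    Y = rng.normal(size=(120, 200))   # optimal design variables (hypothetical shape parameters)

    # Low-rank representation of the output space (PCA assumed here)
    pca = PCA(n_components=10).fit(Y)
    Z = pca.transform(Y)              # reduced coordinates of each optimal design

    # Train a network on the much smaller reduced output space
    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, Z)

    # Predict an optimal design for a new requirement: network gives reduced
    # coefficients, which are reconstructed through the low-rank basis.
    x_new = rng.uniform(size=(1, 3))
    y_pred = pca.inverse_transform(net.predict(x_new))

A multi-fidelity variant along the lines the abstract describes would train such a surrogate partly on cheaper low-fidelity optima and partly on high-fidelity samples, trading a small accuracy cost for a large reduction in data-generation expense.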