The influence of adversarial training on turbulence closure modeling

TYPE OF PUBLICATION

Publication in conference proceeding / workshop

YEAR OF PUBLICATION

2022

PUBLISHER

AIAA SCITECH 2022 Forum

CITATION

Ludovico Nista, Christoph Karl David Schumann, Gandolfo Scialabba, Temistocle Grenga, Antonio Attili and Heinz Pitsch. "The influence of adversarial training on turbulence closure modeling," AIAA 2022-0185. AIAA SCITECH 2022 Forum. January 2022.

SHORT SUMMARY

In recent years, fundamental advances in deep learning frameworks, combined with the availability of large, highly resolved datasets and exponential improvements in computer hardware, have shown great promise for moving beyond classical equation-based models for turbulence closure. Deep convolutional neural networks (CNNs) can super-resolve low-resolution simulations, which makes them attractive for large-eddy simulation subfilter-scale modeling. However, these models often lack generalization capability and cannot guarantee fields with high-wavenumber details. To address these problems, generative adversarial networks (GANs), composed of two competing neural networks (a generator and a discriminator), have been proposed. Despite the remarkable performance of GANs in single-image super-resolution, their application to turbulence modeling remains relatively unexplored. In this work, the contribution of adversarial training is assessed by comparing two types of deep neural networks: a supervised CNN-type model and a semi-supervised GAN-based model. The study demonstrates that the GAN architecture produces higher-quality super-resolved fields than standard deep convolutional networks, better recovering subgrid physical structures. Prolonged adversarial training extracts underlying low-dimensional features in a semi-supervised manner and, consequently, improves the turbulence statistics. Finally, it is shown that the propensity of GAN training to run into convergence oscillations can be limited by a proper selection of the learning rates of both the generator and the discriminator.
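The summary's last point, that stabilizing adversarial training hinges on choosing separate learning rates for the generator and the discriminator, can be illustrated with a deliberately minimal sketch. The example below is not the paper's CNN/GAN super-resolution setup; it is a hypothetical one-dimensional GAN (a linear generator, a logistic discriminator, and hand-derived gradients for a 1-D Gaussian stand-in for the flow data) whose only purpose is to show the alternating two-player update loop with distinct learning rates `lr_g` and `lr_d`.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Stand-in "real" data: a 1-D Gaussian (the paper uses DNS velocity fields).
REAL_MEAN, REAL_STD = 4.0, 0.5

# Generator g(z) = a*z + b; discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr_g, lr_d = 0.02, 0.05  # separate learning rates, echoing the summary's point

batch = 64
init_gap = abs(b - REAL_MEAN)  # distance of generated mean from real mean

for step in range(1500):
    # Discriminator step: minimize -log D(real) - log(1 - D(fake)).
    xr = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    grad_w = np.mean(-(1 - dr) * xr + df * xf)
    grad_c = np.mean(-(1 - dr) + df)
    w -= lr_d * grad_w
    c -= lr_d * grad_c

    # Generator step: non-saturating loss -log D(fake), gradient taken
    # through the (frozen) discriminator by the chain rule.
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    df = sigmoid(w * xf + c)
    dL_dxf = -(1 - df) * w
    a -= lr_g * np.mean(dL_dxf * z)
    b -= lr_g * np.mean(dL_dxf)

final_gap = abs(b - REAL_MEAN)
print(final_gap < init_gap)
```

In this toy setting, adversarial pressure alone (no paired supervision on the generator) pulls the generated distribution toward the real one; making `lr_d` too large relative to `lr_g` is a classic source of the convergence oscillations the summary mentions.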

Link to the publication: https://arc.aiaa.org/doi/abs/10.2514/6.2022-0185