Adversarial twin neural networks: maximizing physics recovery for physical system

29 Sep 2021  ·  Haoran Li, Erik Blasch, Jingyi Yuan, Yang Weng ·

The exact modeling of modern physical systems is challenging due to the expanding system territory and insufficient sensors. To tackle this problem, existing methods utilize sparse regression to find physical parameters or add another virtual learning model, such as a Neural Network (NN), to universally approximate the unobserved physical quantities. However, the two models cannot perfectly play their own roles in joint learning without proper restrictions. Thus, we propose (1) sparsity regularization for the physical model and (2) physical superiority over the virtual model. Together, these constraints define output boundaries for the physical and virtual models. Further, even if the two models output properly, the joint model still cannot guarantee that maximal physical knowledge is learned. For example, if the data of an observed node can linearly represent those of an unobserved node, these two nodes can be aggregated. Therefore, we propose (3) maximizing the dissimilarity between physical and virtual outputs to obtain maximal physics. To achieve goals (1)-(3), we design a twin structure of the Physical Neural Network (PNN) and Virtual Neural Network (VNN), where sparse regularization and skip-connections are utilized to guarantee (1) and (2). We then propose an adversarial learning scheme to maximize output dissimilarity, achieving (3). We denote the model as the Adversarial Twin Neural Network (ATN). Finally, we conduct extensive experiments over various systems to demonstrate that ATN outperforms other state-of-the-art methods.
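
No implementation accompanies the abstract, but a minimal PyTorch sketch can illustrate how the described twin structure might be wired up. Everything below is an assumption for illustration, not the authors' ATN design: the module names, layer sizes, and loss weights are hypothetical; the PNN is modeled as an L1-regularized linear map standing in for physical parameters (goal 1); the skip-connection adds the virtual output as a residual on top of the physical output so the physical model dominates the prediction (goal 2); and the paper's adversarial scheme is simplified here to a single joint loss with a negated dissimilarity term between the two outputs (goal 3).

```python
# Illustrative sketch only -- names, sizes, and loss weights are assumptions,
# not the authors' implementation of ATN.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PNN(nn.Module):
    """Physical model: a sparse linear map whose weights stand in for physical parameters."""
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.params = nn.Linear(dim_in, dim_out, bias=False)

    def forward(self, x):
        return self.params(x)

    def sparsity_penalty(self):
        # Goal (1): sparsity regularization on the physical parameters.
        return self.params.weight.abs().sum()

class VNN(nn.Module):
    """Virtual model: a small MLP approximating unobserved quantities."""
    def __init__(self, dim_in, dim_out, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_in, hidden), nn.ReLU(), nn.Linear(hidden, dim_out)
        )

    def forward(self, x):
        return self.net(x)

class ATN(nn.Module):
    """Twin structure with a skip-connection: the prediction is the physical
    output plus a virtual residual, reflecting physical superiority (goal 2)."""
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.pnn = PNN(dim_in, dim_out)
        self.vnn = VNN(dim_in, dim_out)

    def forward(self, x):
        y_p = self.pnn(x)
        y_v = self.vnn(x)
        return y_p + y_v, y_p, y_v  # skip-connection: physical + virtual

def atn_loss(model, y_hat, y_p, y_v, y_true, lam_sparse=1e-3, lam_dissim=1e-2):
    fit = F.mse_loss(y_hat, y_true)              # data-fitting term
    sparse = model.pnn.sparsity_penalty()        # goal (1)
    dissim = F.mse_loss(y_p, y_v)                # goal (3): push the outputs apart
    return fit + lam_sparse * sparse - lam_dissim * dissim

# Toy usage with random data (shapes are arbitrary).
model = ATN(dim_in=10, dim_out=10)
x, y = torch.randn(32, 10), torch.randn(32, 10)
y_hat, y_p, y_v = model(x)
loss = atn_loss(model, y_hat, y_p, y_v, y)
loss.backward()
```

In this simplified reading, the negated dissimilarity term rewards the physical and virtual outputs for differing, so the virtual model cannot simply duplicate what the physical model already explains; the paper's adversarial training scheme presumably achieves this with a proper min-max formulation rather than a single fixed loss.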
