Leveraging Intrinsic Gradient Information for Further Training of Differentiable Machine Learning Models

30 Nov 2021  ·  Chris McDonagh, Xi Chen ·

Designing models that produce accurate predictions is the fundamental objective of machine learning (ML). This work presents methods demonstrating that when the derivatives of target variables (outputs) with respect to inputs can be extracted from processes of interest, e.g., neural network (NN)-based surrogate models, they can be leveraged to further improve the accuracy of differentiable ML models. This paper generalises the idea and provides practical methodologies that can be used to leverage gradient information (GI) across a variety of applications, including: (1) improving the performance of generative adversarial networks (GANs); (2) efficiently tuning NN model complexity; (3) regularising linear regressions. Numerical results show that GI can effectively enhance ML models with existing datasets, demonstrating its value for a variety of applications.
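As a rough illustration of the idea described in the abstract (not the paper's exact method), the sketch below assumes that target derivatives dy/dx are available from the process of interest and adds a gradient-matching term to an otherwise standard regression loss. The synthetic data, network architecture, and weighting factor `lam` are hypothetical choices for demonstration only.

```python
# Minimal sketch of gradient-informed training, assuming derivatives of the
# target with respect to the inputs are available alongside (x, y) pairs.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic process: y = sin(x), with intrinsic gradient dy/dx = cos(x).
x = torch.linspace(-3.0, 3.0, 200).unsqueeze(1)
y = torch.sin(x)
dy_dx = torch.cos(x)  # gradient information assumed extractable from the process

model = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 0.5  # illustrative weight on the gradient-matching term

for step in range(2000):
    x_req = x.clone().requires_grad_(True)
    y_pred = model(x_req)
    # Model's input gradient via autograd; create_graph=True so the
    # gradient-matching loss can itself be backpropagated.
    g_pred = torch.autograd.grad(y_pred.sum(), x_req, create_graph=True)[0]
    loss = (nn.functional.mse_loss(y_pred, y)
            + lam * nn.functional.mse_loss(g_pred, dy_dx))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this setup, the extra term penalises mismatch between the model's input gradients and the known derivatives, so the available gradient information supplements the existing dataset rather than requiring new samples.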
