SurReal: Complex-Valued Learning as Principled Transformations on a Scaling and Rotation Manifold

Complex-valued data is ubiquitous in signal and image processing applications, and complex-valued representations in deep learning have appealing theoretical properties. While these aspects have long been recognized, complex-valued deep learning continues to lag far behind its real-valued counterpart. We propose a principled geometric approach to complex-valued deep learning. Complex-valued data is often subject to arbitrary complex-valued scaling; as a result, its real and imaginary components co-vary. Instead of treating complex values as two independent channels of real values, we recognize their underlying geometry: we model the space of complex numbers as a product manifold of non-zero scaling and planar rotations. Arbitrary complex-valued scaling then naturally becomes a group of transitive actions on this manifold. Rather than extending the form of real-valued functions to the complex domain, we propose to extend their properties. We define convolution as a weighted Fréchet mean on the manifold that is equivariant to the group of scaling/rotation actions, and define a distance transform on the manifold that is invariant to the action group. The manifold perspective also allows us to define nonlinear activation functions, such as tangent ReLU and G-transport, as well as residual connections on manifold-valued data. We dub our model SurReal: in our experiments on MSTAR and RadioML, it delivers high performance at only a fraction of the size of real-valued and complex-valued baseline models.
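To make the geometry concrete, below is a minimal NumPy sketch of the three operations the abstract names: a weighted Fréchet mean with convex weights, the product-manifold distance, and a tangent ReLU. This is not the authors' implementation; the function names (wfm, manifold_distance, tangent_relu), the closed-form circular mean used as a surrogate for the Fréchet mean on the rotation component, and the choice of 1 + 0j as the tangent point are all simplifying assumptions made for illustration.

```python
import numpy as np

def wfm(z, w, eps=1e-8):
    """Weighted Frechet mean of non-zero complex numbers under the product
    metric: log-Euclidean on magnitudes, circular on phases.

    With convex weights (non-negative, summing to 1), the mean is
    equivariant to complex scaling: wfm(c * z, w) == c * wfm(z, w).
    The phase uses the extrinsic circular mean, a standard closed-form
    surrogate for the Frechet mean on the circle (an assumption here).
    """
    w = np.asarray(w, dtype=float)
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0)
    log_mag = np.log(np.abs(z) + eps)               # scaling component, log domain
    mean_mag = np.exp(np.sum(w * log_mag))          # weighted geometric mean
    mean_phase = np.angle(np.sum(w * np.exp(1j * np.angle(z))))  # circular mean
    return mean_mag * np.exp(1j * mean_phase)

def manifold_distance(z1, z2, eps=1e-8):
    """Geodesic distance on the product manifold; unchanged when both
    points are multiplied by the same non-zero complex number."""
    d_log = np.log(np.abs(z1) + eps) - np.log(np.abs(z2) + eps)
    d_ang = np.angle(z1 / z2)                       # wrapped phase difference
    return np.hypot(d_log, d_ang)

def tangent_relu(z, eps=1e-8):
    """A sketch of tangent ReLU: lift to tangent coordinates at the
    identity 1 + 0j (log-magnitude, phase), apply ReLU coordinate-wise,
    and map back to the manifold."""
    log_mag = np.maximum(np.log(np.abs(z) + eps), 0.0)
    phase = np.maximum(np.angle(z), 0.0)
    return np.exp(log_mag) * np.exp(1j * phase)

# Equivariance / invariance check under an arbitrary complex scaling c.
z = np.array([1.0, 2.0, 0.5]) * np.exp(1j * np.array([0.2, 0.5, -0.3]))
w = np.array([0.5, 0.3, 0.2])
c = 3.0 * np.exp(1j * 0.7)
print(np.isclose(wfm(c * z, w), c * wfm(z, w)))     # True: wFM is equivariant
print(np.isclose(manifold_distance(c * z[0], c * z[1]),
                 manifold_distance(z[0], z[1])))    # True: distance is invariant
```

Because the distance between two manifold points is invariant to joint scaling/rotation, taking distances against an equivariant statistic such as the wFM yields features that are invariant to arbitrary complex-valued scaling of the input, which is the property the abstract highlights.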
