Rotated MNIST

18 papers with code • 1 benchmark • 1 dataset

Rotated MNIST is a variant of the MNIST handwritten-digit benchmark in which the digit images are rotated. It is commonly used to evaluate rotation-invariant or rotation-equivariant models and, with rotation angles treated as domains, domain generalization methods.

Most implemented papers

Domain Generalization using Causal Matching

microsoft/robustdg arXiv 2020

In the domain generalization literature, a common objective is to learn representations independent of the domain after conditioning on the class label.
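
As a rough illustration of that objective, here is a minimal PyTorch sketch: a penalty pulls together representations of same-class samples from different domains, on top of the usual classification loss. `SmallNet`, `step_loss`, and the up-front same-class pairing are illustrative simplifications; the paper instead learns the matching iteratively.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """Toy feature extractor + classifier (a stand-in for the paper's model)."""
    def __init__(self, in_dim=784, feat_dim=64, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.classify = nn.Linear(feat_dim, n_classes)

def matching_penalty(z_a, z_b):
    # Squared distance between representations of matched (same-class,
    # cross-domain) pairs -- pushes features toward domain independence.
    return ((z_a - z_b) ** 2).sum(dim=1).mean()

def step_loss(net, x_a, x_b, y, lam=0.1):
    # x_a, x_b: batches of same-class inputs from two different domains,
    # paired up front (the paper learns this matching iteratively).
    z_a, z_b = net.features(x_a), net.features(x_b)
    ce = F.cross_entropy(net.classify(z_a), y) + F.cross_entropy(net.classify(z_b), y)
    return ce + lam * matching_penalty(z_a, z_b)

net = SmallNet()
x_a, x_b = torch.randn(32, 784), torch.randn(32, 784)  # stand-in "domains"
y = torch.randint(0, 10, (32,))
step_loss(net, x_a, x_b, y).backward()
```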

CyCNN: A Rotation Invariant CNN using Polar Mapping and Cylindrical Convolution Layers

mcrl/CyCNN 21 Jul 2020

Deep Convolutional Neural Networks (CNNs) are empirically known to be invariant to moderate translation but not to rotation in image classification.
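
Per the title, the idea is that resampling an image into polar coordinates turns rotation about the center into a cyclic shift along the angular axis, which a convolution with circular padding can absorb. A minimal sketch assuming PyTorch; function names and sizes are illustrative, not the repo's API:

```python
import torch
import torch.nn.functional as F

def to_polar(img, n_r=28, n_theta=28):
    """Resample a (1, 1, H, W) image onto an (r, theta) grid so that a
    rotation about the image center becomes a cyclic shift along theta."""
    r = torch.linspace(0, 1, n_r)
    theta = torch.linspace(-torch.pi, torch.pi, n_theta)
    R, T = torch.meshgrid(r, theta, indexing="ij")
    # grid_sample expects (x, y) sampling coordinates normalized to [-1, 1]
    grid = torch.stack([R * torch.cos(T), R * torch.sin(T)], dim=-1)[None]
    return F.grid_sample(img, grid, align_corners=True)

def cylindrical_conv(x, weight):
    """Conv2d with circular padding along the angular axis and zero padding
    along the radial axis -- a 'cylindrical' convolution."""
    x = F.pad(x, (1, 1, 0, 0), mode="circular")  # wrap theta
    x = F.pad(x, (0, 0, 1, 1))                   # zero-pad r
    return F.conv2d(x, weight)

img = torch.randn(1, 1, 28, 28)
w = torch.randn(4, 1, 3, 3)
print(cylindrical_conv(to_polar(img), w).shape)  # torch.Size([1, 4, 28, 28])
```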

Learning Partial Equivariances from Data

merlresearch/partial_gcnn 19 Oct 2021

Frequently, transformations occurring in data can be better represented by a subset of a group than by a group as a whole, e.g., rotations in $[-90^{\circ}, 90^{\circ}]$.
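
A hedged sketch of that idea in PyTorch: instead of pooling over the full rotation group, average the network's output over rotations drawn from a learnable sub-interval $[-\alpha, \alpha]$. The paper learns distributions over group elements inside group convolutions; this input-rotation version and all names are simplifications.

```python
import torch
import torch.nn.functional as F

def rotate(x, angle):
    """Rotate a (N, C, H, W) batch by `angle` (radians) via an affine grid."""
    c, s = torch.cos(angle), torch.sin(angle)
    row0 = torch.stack([c, -s, torch.zeros_like(c)])
    row1 = torch.stack([s, c, torch.zeros_like(c)])
    theta = torch.stack([row0, row1])[None].expand(x.size(0), -1, -1)
    grid = F.affine_grid(theta, x.shape, align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)

class PartialRotationPool(torch.nn.Module):
    """Average a network's output over rotations drawn from a learnable
    sub-interval [-max_angle, max_angle] instead of the full circle."""
    def __init__(self, net, init_angle=torch.pi / 2, n_samples=4):
        super().__init__()
        self.net, self.n = net, n_samples
        self.max_angle = torch.nn.Parameter(torch.tensor(init_angle))

    def forward(self, x):
        outs = []
        for _ in range(self.n):
            a = (2 * torch.rand(()) - 1) * self.max_angle  # U(-max, max)
            outs.append(self.net(rotate(x, a)))
        return torch.stack(outs).mean(dim=0)

net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
model = PartialRotationPool(net)
print(model(torch.randn(8, 1, 28, 28)).shape)  # torch.Size([8, 10])
```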

Exploiting Redundancy: Separable Group Convolutional Networks on Lie Groups

david-knigge/separable-group-convolutional-networks 25 Oct 2021

In addition, thanks to the increased computational efficiency, we are able to implement G-CNNs equivariant to the $\mathrm{Sim(2)}$ group: the group of dilations, rotations, and translations.
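
As a rough sketch of the factorization that buys this efficiency: a full group-convolution kernel over (group, height, width) is replaced by a cheap pointwise group/channel mixing followed by a shared spatial convolution. The toy PyTorch module below (illustrative names; it omits the kernel rotations needed for exact equivariance and the paper's continuous kernel parameterization) shows the shape bookkeeping:

```python
import torch
import torch.nn as nn

class SeparableGroupConv(nn.Module):
    """Toy separable group convolution on lifted maps (N, C, |G|, H, W):
    a pointwise group/channel mixing followed by a depthwise spatial conv
    shared across group elements, instead of one full (G x k x k) kernel."""
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.group_mix = nn.Conv1d(c_in, c_out, kernel_size=1)  # group part
        self.spatial = nn.Conv2d(c_out, c_out, k, padding=k // 2,
                                 groups=c_out)                  # spatial part

    def forward(self, x):
        n, c, g, h, w = x.shape
        # 1) mix channels at every (group element, pixel) location
        x = self.group_mix(x.permute(0, 3, 4, 1, 2).reshape(-1, c, g))
        c = x.size(1)
        # 2) shared depthwise spatial filter, group axis folded into batch
        x = x.reshape(n, h, w, c, g).permute(0, 4, 3, 1, 2).reshape(n * g, c, h, w)
        x = self.spatial(x)
        return x.reshape(n, g, c, h, w).permute(0, 2, 1, 3, 4)

x = torch.randn(2, 8, 4, 28, 28)  # MNIST-sized maps lifted to 4 rotations
print(SeparableGroupConv(8, 16)(x).shape)  # torch.Size([2, 16, 4, 28, 28])
```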

Learning Invariant Representations for Equivariant Neural Networks Using Orthogonal Moments

jaspreetsinghmaan/g-cnn-orims 22 Sep 2022

The final classification layer in equivariant neural networks is invariant to affine geometric transformations such as rotation, reflection, and translation; the scalar value is obtained either by eliminating the spatial dimensions of the filter responses through convolution and down-sampling throughout the network, or by averaging over the filter responses.
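
For concreteness, here is a sketch of the standard averaging readout described above (not the paper's moment-based alternative): averaging an equivariant feature map over the group and spatial axes yields a vector unchanged by the corresponding transformations.

```python
import torch

def invariant_readout(feats):
    """Average a group-equivariant map (N, C, |G|, H, W) over the group and
    spatial axes, yielding an (N, C) vector unchanged by those transforms."""
    return feats.mean(dim=(2, 3, 4))

f = torch.randn(8, 32, 4, 7, 7)
# Rolling the group axis models a rotation's action on the group dimension
# (the accompanying spatial rotation is omitted in this toy check).
shifted = torch.roll(f, shifts=1, dims=2)
print(torch.allclose(invariant_readout(f), invariant_readout(shifted)))  # True
```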

Learning unfolded networks with a cyclic group structure

manosth/cyclical_groups 16 Nov 2022

Deep neural networks lack straightforward ways to incorporate domain knowledge and are notoriously considered black boxes.

Artificial Neuronal Ensembles with Learned Context Dependent Gating

m-j-tilley/LXDG 17 Jan 2023

Finally, there is a regularization term responsible for ensuring that new tasks are encoded in gates that are as orthogonal as possible to previously used ones.
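
A minimal sketch of such a penalty in PyTorch (`gate_orthogonality_penalty` and the shapes are illustrative, not the repo's API): square the overlap between the new task's gate vector and each previously stored gate, so minimizing it drives the gates toward orthogonality.

```python
import torch

def gate_orthogonality_penalty(gate_new, prev_gates):
    """gate_new: (D,) gating vector being learned for the current task;
    prev_gates: (T, D) gates stored for the T earlier tasks."""
    if prev_gates.numel() == 0:
        return gate_new.new_zeros(())       # first task: nothing to compare
    overlaps = prev_gates @ gate_new        # (T,) dot products with old gates
    return (overlaps ** 2).mean()           # zero iff mutually orthogonal

g_new = torch.rand(128, requires_grad=True)
g_prev = torch.rand(3, 128)
gate_orthogonality_penalty(g_new, g_prev).backward()
```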