54 papers with code • 0 benchmarks • 0 datasets

We introduce Margin Disparity Discrepancy, a novel measurement with rigorous generalization bounds, tailored to distribution comparison with an asymmetric margin loss and to minimax optimization for easier training.

A prominent technique for self-supervised representation learning has been to contrast semantically similar and dissimilar pairs of samples.
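The contrastive idea above is often instantiated as an InfoNCE-style loss: pull an anchor toward a semantically similar (positive) sample and push it away from dissimilar (negative) samples. A minimal sketch, assuming cosine similarity and a temperature of 0.1 (these choices, and the function name, are illustrative assumptions, not any specific paper's code):

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """Minimal InfoNCE-style contrastive loss (illustrative sketch).

    anchor, positive: (d,) embeddings of a semantically similar pair.
    negatives: (n, d) embeddings of dissimilar samples.
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    pos = cos(anchor, positive) / temperature
    negs = np.array([cos(anchor, n) for n in negatives]) / temperature
    logits = np.concatenate([[pos], negs])
    # Cross-entropy with the positive pair treated as the correct class:
    # loss is small when the anchor is much closer to the positive
    # than to any negative.
    return -pos + np.log(np.sum(np.exp(logits)))
```

When the anchor and positive are aligned and the negatives are orthogonal or opposed, the loss is close to zero; it grows as negatives become as similar to the anchor as the positive is.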

We give a novel, simple and intuitive generalization-error bound showing that the expected ITE estimation error of a representation is bounded by the sum of the standard generalization error of that representation and the distance between the treated and control distributions it induces.

Ranked #1 on Causal Inference on IHDP
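Schematically, a bound of this shape can be written as follows (the symbols below are our own shorthand for the quantities named in the snippet, not the paper's notation: \(\Phi\) is the representation, \(B\) a constant, and \(\mathrm{dist}\) the induced distributional distance):

```latex
\epsilon_{\mathrm{ITE}}(\Phi) \;\le\; \epsilon_{\mathrm{gen}}(\Phi)
  \;+\; B \cdot \mathrm{dist}\!\left(p_{\Phi}^{\,t=1},\; p_{\Phi}^{\,t=0}\right)
```

Minimizing the right-hand side trades off predictive fit against balancing the treated and control groups in representation space.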

Further, we provide transfer and generalization bounds based on task and state similarity, along with sample complexity bounds that depend on the aggregate number of samples across tasks rather than on the number of tasks, a significant improvement over prior work that uses the same environment assumptions.

Designing an incentive compatible auction that maximizes expected revenue is an intricate task.

We present GraphMix, a regularization method for Graph Neural Network-based semi-supervised object classification, whereby we propose to train a fully-connected network jointly with the graph neural network via parameter sharing and interpolation-based regularization.

Ranked #1 on Node Classification on Cora: fixed 5 nodes per class
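The interpolation-based regularization mentioned above is commonly realized as mixup: training on convex combinations of example pairs and their labels. A minimal sketch of that ingredient (the function name and the Beta(1, 1) mixing prior are assumptions for illustration, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x1, y1, x2, y2, alpha=1.0):
    """Interpolation-based regularization (mixup-style sketch).

    Draws a mixing weight lam ~ Beta(alpha, alpha) and returns the
    convex combination of two inputs and their (one-hot) labels.
    """
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1 - lam) * x2
    y = lam * y1 + (1 - lam) * y2
    return x, y
```

In a GraphMix-style setup, such interpolated pairs would be fed to the fully-connected network whose parameters are shared with the GNN.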

We identify a necessary property for such an architecture: each of the layers must preserve the gradient norm during backpropagation.
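The gradient-norm-preservation property above can be checked concretely for a linear layer: the backward pass of y = W x maps an incoming gradient g to W^T g, and if W is orthogonal then ||W^T g|| = ||g||. A small numerical sketch (the 8x8 size and QR-based orthogonalization are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Orthogonalize a random square matrix via QR decomposition.
A = rng.standard_normal((8, 8))
W, _ = np.linalg.qr(A)  # W is orthogonal: W.T @ W = I

g = rng.standard_normal(8)  # gradient w.r.t. the layer output
g_back = W.T @ g            # gradient w.r.t. the layer input

# For orthogonal W, backpropagation through the layer preserves
# the gradient norm exactly.
```

A generic (non-orthogonal) weight matrix would instead scale the gradient norm by factors tied to its singular values, which is what such architectures are designed to avoid.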

In the presence of noisy or incorrect labels, neural networks have the undesirable tendency to memorize information about the noise.

Our main technical result is a generalization bound for compressed networks based on the compressed size.

We pose causal inference as the problem of learning to classify probability distributions.