Computational Efficiency

857 papers with code • 0 benchmarks • 0 datasets



Most implemented papers

Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks

google-research/google-research KDD 2019

Furthermore, Cluster-GCN allows us to train much deeper GCNs without much time and memory overhead, which leads to improved prediction accuracy: using a 5-layer Cluster-GCN, we achieve a state-of-the-art test F1 score of 99.36 on the PPI dataset, while the previous best result was 98.71 by [16].

A Transformer-based Framework for Multivariate Time Series Representation Learning

gzerveas/mvts_transformer 6 Oct 2020

In this work we propose for the first time a transformer-based framework for unsupervised representation learning of multivariate time series.

Towards Good Practices for Very Deep Two-Stream ConvNets

yjxiong/caffe 8 Jul 2015

However, for action recognition in videos, the improvement of deep convolutional networks is not so evident.

Distribution-Free Predictive Inference For Regression

ryantibs/conformal 14 Apr 2016

In the spirit of reproducibility, all of our empirical results can also be easily (re)generated using this package.

Continual Learning Through Synaptic Intelligence

ganguli-lab/pathint ICML 2017

While deep learning has led to remarkable advances across diverse applications, it struggles in domains where the data distribution changes over the course of learning.

GraphGAN: Graph Representation Learning with Generative Adversarial Nets

hwwang55/GraphGAN 22 Nov 2017

The goal of graph representation learning is to embed each vertex in a graph into a low-dimensional vector space.
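As a classical illustration of what vertex embedding means (not GraphGAN's adversarial method), a Laplacian-eigenmaps sketch places each vertex of a small example graph into a 2-dimensional vector space; the adjacency matrix here is an assumed toy input:

```python
import numpy as np

# Toy 4-vertex undirected graph (assumed example, not from the paper).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))      # degree matrix
L = D - A                       # unnormalized graph Laplacian
vals, vecs = np.linalg.eigh(L)  # eigenvalues in ascending order

# Skip the constant eigenvector (eigenvalue 0); keep the next 2 dimensions.
emb = vecs[:, 1:3]
print(emb.shape)                # each vertex -> a 2-dimensional vector
```

GraphGAN instead learns such embeddings with a generator/discriminator game over vertex connectivity, but the end product is the same kind of low-dimensional vector per vertex.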

Multi-level Wavelet-CNN for Image Restoration

lpj0/MWCNN 18 May 2018

With the modified U-Net architecture, wavelet transform is introduced to reduce the size of feature maps in the contracting subnetwork.
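The key idea of shrinking feature maps with a wavelet transform can be sketched with a one-level 2-D Haar transform (a minimal NumPy sketch, not the paper's implementation): each sub-band is half the spatial size, and the transform is invertible, so no information is discarded.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar transform: splits an (H, W) map into four
    (H/2, W/2) sub-bands (LL, LH, HL, HH). Spatial size halves, but the
    transform is invertible, so all information is preserved."""
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # low-frequency approximation
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh

x = np.arange(16.0).reshape(4, 4)
subbands = haar_dwt2(x)
print([s.shape for s in subbands])  # each sub-band is (2, 2)
```

In MWCNN this downsampling replaces pooling in the contracting path of the U-Net, and the inverse transform replaces upsampling in the expanding path.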

RWKV: Reinventing RNNs for the Transformer Era

BlinkDL/RWKV-LM 22 May 2023

This work presents a significant step towards reconciling trade-offs between computational efficiency and model performance in sequence processing tasks.

Discovering and Deciphering Relationships Across Disparate Data Modalities

neurodata/mgcpy 16 Sep 2016

Understanding the relationships between different properties of data, such as whether a connectome or genome has information about disease status, is becoming increasingly important in modern biological datasets.

Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer

davidmrau/mixture-of-experts 23 Jan 2017

In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters.
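The conditional-computation idea can be sketched as top-k sparse gating (a simplified NumPy sketch; the paper's layer adds tunable noise and load-balancing losses): each input activates only k experts, so compute scales with k rather than with the total expert count.

```python
import numpy as np

def topk_gating(x, w_gate, k=2):
    """Sparse top-k gating sketch: route each example to only k experts.
    Non-selected experts get exactly zero weight and are never evaluated."""
    logits = x @ w_gate                        # (batch, n_experts)
    topk = np.argsort(logits, axis=1)[:, -k:]  # indices of the k largest
    masked = np.full_like(logits, -np.inf)     # -inf -> zero after softmax
    np.put_along_axis(masked, topk,
                      np.take_along_axis(logits, topk, axis=1), axis=1)
    e = np.exp(masked - masked.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)    # sparse mixture weights

rng = np.random.default_rng(0)
gates = topk_gating(rng.normal(size=(4, 8)), rng.normal(size=(8, 16)), k=2)
print((gates > 0).sum(axis=1))  # k nonzero gates per example
```

Because only k of the 16 (assumed) experts run per example, total parameter count can grow enormously while per-example FLOPs stay roughly constant, which is the source of the >1000x capacity gain cited above.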