Trending Research

Beyond Self-Supervision: A Simple Yet Effective Network Distillation Alternative to Improve Backbones

PaddlePaddle/PaddleClas 10 Mar 2021

Recently, research efforts have been concentrated on revealing how pre-trained models make a difference in neural network performance.

Knowledge Distillation · Object Detection +2

3.70 stars / hour
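The entry above tags knowledge distillation. As background only (this is not the paper's specific scheme, just the standard soft-target loss that distillation approaches build on), a minimal sketch of a temperature-softened teacher/student KL loss:

```python
import numpy as np

def softmax(z, t=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / t
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, t=4.0):
    """KL divergence between the softened teacher and student
    distributions, scaled by t^2 (the classic soft-target loss).
    Function name and temperature value are illustrative choices."""
    p = softmax(teacher_logits, t)  # teacher soft targets
    q = softmax(student_logits, t)  # student predictions
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    return float(kl.mean() * t * t)

# Usage: a student that matches the teacher incurs zero loss.
teacher = np.array([[2.0, 0.5, 0.1]])
student = np.array([[1.0, 0.0, -1.0]])
print(distillation_loss(student, teacher))   # positive
print(distillation_loss(teacher, teacher))   # 0.0
```

A higher temperature `t` flattens both distributions, so the student also learns from the teacher's relative rankings of wrong classes, not just the argmax.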

DouZero: Mastering DouDizhu with Self-Play Deep Reinforcement Learning

kwai/DouZero 11 Jun 2021

Games are abstractions of the real world, where artificial agents learn to compete and cooperate with other agents.

Multi-agent Reinforcement Learning

3.51 stars / hour

DeepLab2: A TensorFlow Library for Deep Labeling

google-research/deeplab2 17 Jun 2021

DeepLab2 is a TensorFlow library for deep labeling, aiming to provide a state-of-the-art and easy-to-use TensorFlow codebase for general dense pixel prediction problems in computer vision.

2.55 stars / hour

GIRAFFE: Representing Scenes as Compositional Generative Neural Feature Fields

autonomousvision/giraffe 24 Nov 2020

While several recent works investigate how to disentangle underlying factors of variation in the data, most of them operate in 2D and hence ignore that our world is three-dimensional.

Image Generation · Neural Rendering

1.54 stars / hour

XCiT: Cross-Covariance Image Transformers

facebookresearch/xcit 17 Jun 2021

We propose a "transposed" version of self-attention that operates across feature channels rather than tokens, where the interactions are based on the cross-covariance matrix between keys and queries.

Instance Segmentation · Object Detection +2

1.46 stars / hour
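The mechanism described above can be sketched in a few lines: instead of an N×N token-to-token attention map, build a d×d channel-to-channel map from the cross-covariance of (L2-normalized) keys and queries. This is a minimal single-head NumPy sketch of the idea, not the authors' implementation; the temperature handling and normalization details here are simplified assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_covariance_attention(q, k, v, tau=1.0):
    """Attention across feature channels rather than tokens.

    q, k, v: (n_tokens, d) arrays. The attention map is d x d,
    so its cost scales with the channel dimension d instead of
    the token count n. `tau` stands in for XCiT's learnable
    temperature (simplified to a constant here).
    """
    # L2-normalize queries and keys along the token axis
    q = q / (np.linalg.norm(q, axis=0, keepdims=True) + 1e-8)
    k = k / (np.linalg.norm(k, axis=0, keepdims=True) + 1e-8)
    attn = softmax((k.T @ q) / tau, axis=-1)  # (d, d) cross-covariance map
    return v @ attn                            # (n_tokens, d)

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((16, 8)) for _ in range(3))
out = cross_covariance_attention(q, k, v)
print(out.shape)  # (16, 8)
```

Because the map is d×d, the cost is linear in the number of tokens, which is what lets this style of attention scale to high-resolution images.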

GANs N' Roses: Stable, Controllable, Diverse Image to Image Translation (works for videos too!)

mchong6/GANsNRoses 11 Jun 2021

This adversarial loss guarantees the map is diverse -- a very wide range of anime can be produced from a single content code.

Image-to-Image Translation

1.31 stars / hour

Lattice-BERT: Leveraging Multi-Granularity Representations in Chinese Pre-trained Language Models

alibaba/AliceMind NAACL 2021

Further analysis shows that Lattice-BERT can harness the lattice structures, and the improvement comes from the exploration of redundant information and multi-granularity representations.

Natural Language Understanding

1.24 stars / hour

PALM: Pre-training an Autoencoding&Autoregressive Language Model for Context-conditioned Generation

alibaba/AliceMind 14 Apr 2020

An extensive set of experiments shows that PALM achieves new state-of-the-art results on a variety of language generation benchmarks, covering generative question answering (Rank 1 on the official MARCO leaderboard), abstractive summarization on CNN/DailyMail and Gigaword, question generation on SQuAD, and conversational response generation on Cornell Movie Dialogues.

Abstractive Text Summarization · Conversational Response Generation +6

1.24 stars / hour

Multi-granularity hierarchical attention fusion networks for reading comprehension and question answering

alibaba/AliceMind ACL 2018

Extensive experiments on the large-scale SQuAD and TriviaQA datasets validate the effectiveness of the proposed method.

Question Answering · Reading Comprehension

1.24 stars / hour

GAN Prior Embedded Network for Blind Face Restoration in the Wild

yangxy/GPEN 13 May 2021

The proposed GAN prior embedded network (GPEN) is easy to implement, and it can generate visually photo-realistic results.

Blind Face Restoration · Image Generation

0.95 stars / hour