Search Results for author: Tianzhe Wang

Found 7 papers, 3 papers with code

Learning Universe Model for Partial Matching Networks over Multiple Graphs

no code implementations 19 Oct 2022 Zetian Jiang, Jiaxin Lu, Tianzhe Wang, Junchi Yan

We consider the general setting of partial matching over two or more graphs, in which not all nodes in one graph necessarily have correspondences in another graph, and vice versa.

Graph Matching · Metric Learning · +1

Spanning Tree-based Graph Generation for Molecules

no code implementations ICLR 2022 Sungsoo Ahn, Binghong Chen, Tianzhe Wang, Le Song

In this paper, we explore the problem of generating molecules using deep neural networks, which has recently gained much interest in chemistry.

Graph Generation · Molecular Graph Generation

Molecule Optimization by Explainable Evolution

no code implementations ICLR 2021 Binghong Chen, Tianzhe Wang, Chengtao Li, Hanjun Dai, Le Song

Optimizing molecules for desired properties is a fundamental yet challenging task in chemistry, material science and drug discovery.

Drug Discovery

APQ: Joint Search for Network Architecture, Pruning and Quantization Policy

1 code implementation CVPR 2020 Tianzhe Wang, Kuan Wang, Han Cai, Ji Lin, Zhijian Liu, Song Han

However, training this quantization-aware accuracy predictor requires collecting a large number of quantized <model, accuracy> pairs, which involves quantization-aware finetuning and thus is highly time-consuming.

Quantization
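
The APQ entry above hinges on a quantization-aware accuracy predictor trained on collected <model, accuracy> pairs. Below is a minimal sketch, not the authors' released code, of such a predictor as a small MLP regressor; the encoding dimension, hidden sizes, and random stand-in training data are illustrative assumptions.

```python
# Sketch: an MLP that maps an encoded (architecture, quantization policy)
# pair to a predicted accuracy. Encoding scheme and sizes are placeholders.
import torch
import torch.nn as nn

class AccuracyPredictor(nn.Module):
    def __init__(self, encoding_dim: int = 128, hidden_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(encoding_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # predicted top-1 accuracy
        )

    def forward(self, encoding: torch.Tensor) -> torch.Tensor:
        return self.net(encoding).squeeze(-1)

# Fit the predictor on <model, accuracy> pairs (random placeholders here;
# in practice these come from quantization-aware finetuning runs).
predictor = AccuracyPredictor()
optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

encodings = torch.rand(1024, 128)   # encoded architecture + quantization policy
accuracies = torch.rand(1024)       # measured accuracies (placeholder values)

for epoch in range(10):
    loss = loss_fn(predictor(encodings), accuracies)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```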

Once for All: Train One Network and Specialize it for Efficient Deployment

1 code implementation ICLR 2020 Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, Song Han

Most of the traditional approaches either manually design or use neural architecture search (NAS) to find a specialized neural network and train it from scratch for each case, which is computationally expensive and unscalable.

Neural Architecture Search

Once-for-All: Train One Network and Specialize it for Efficient Deployment

10 code implementations 26 Aug 2019 Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, Song Han

On diverse edge devices, OFA consistently outperforms state-of-the-art (SOTA) NAS methods (up to 4.0% ImageNet top-1 accuracy improvement over MobileNetV3, or the same accuracy but 1.5x faster than MobileNetV3 and 2.6x faster than EfficientNet w.r.t. measured latency) while reducing GPU hours and $CO_2$ emission by many orders of magnitude.

Neural Architecture Search
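
The OFA results above come from training one network once and then specializing subnetworks for each deployment scenario without retraining. The sketch below illustrates that specialization step only in spirit: it samples subnetwork configurations from an elastic search space and picks one under a latency budget. The search-space values, latency table, and accuracy estimator are hypothetical stand-ins, not the released OFA API.

```python
# Sketch: pick a subnetwork of a (hypothetical) once-for-all supernet
# that meets a latency budget, with no retraining involved.
import random

KERNEL_SIZES = [3, 5, 7]
DEPTHS = [2, 3, 4]
WIDTH_MULTS = [3, 4, 6]

def sample_subnet_config(num_stages: int = 5) -> dict:
    """Randomly sample one subnetwork from the elastic search space."""
    return {
        "kernel": [random.choice(KERNEL_SIZES) for _ in range(num_stages)],
        "depth": [random.choice(DEPTHS) for _ in range(num_stages)],
        "width": [random.choice(WIDTH_MULTS) for _ in range(num_stages)],
    }

def estimated_latency_ms(config: dict) -> float:
    """Stand-in for a device-specific latency lookup table."""
    return sum(k * d * w * 0.05 for k, d, w in
               zip(config["kernel"], config["depth"], config["width"]))

def estimated_accuracy(config: dict) -> float:
    """Stand-in for an accuracy predictor fit on sampled subnetworks."""
    return 70.0 + 0.1 * sum(config["depth"]) + 0.05 * sum(config["kernel"])

# Random search under a latency budget; larger candidate pools or an
# evolutionary search would work the same way.
budget_ms = 25.0
best = max(
    (sample_subnet_config() for _ in range(1000)),
    key=lambda c: estimated_accuracy(c)
    if estimated_latency_ms(c) <= budget_ms else -1.0,
)
print(best, estimated_latency_ms(best), estimated_accuracy(best))
```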
