Search Results for author: Meng-Hao Guo

Found 10 papers, 7 papers with code

CharacterGen: Efficient 3D Character Generation from Single Images with Multi-View Pose Canonicalization

no code implementations 27 Feb 2024 Hao-Yang Peng, Jia-Peng Zhang, Meng-Hao Guo, Yan-Pei Cao, Shi-Min Hu

In the field of digital content creation, generating high-quality 3D characters from single images is challenging, especially given the complexities of various body poses and the issues of self-occlusion and pose ambiguity.

Long Range Pooling for 3D Large-Scale Scene Understanding

no code implementations CVPR 2023 Xiang-Li Li, Meng-Hao Guo, Tai-Jiang Mu, Ralph R. Martin, Shi-Min Hu

To achieve the above properties, we propose a simple yet effective long range pooling (LRP) module using dilated max pooling, which provides a network with a large adaptive receptive field.

Scene Understanding
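
The LRP entry above hinges on dilated max pooling to enlarge the receptive field cheaply. Below is a minimal 2D sketch of that idea only; the paper itself targets 3D scene data, and the class name, kernel size, and dilation rate are illustrative assumptions rather than the paper's configuration.

```python
# A minimal sketch of dilated max pooling: a small window with dilation
# covers a much larger neighbourhood at the same cost. Illustrative only;
# kernel size and dilation are assumptions, not the LRP module's settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedMaxPool(nn.Module):
    def __init__(self, kernel_size: int = 3, dilation: int = 4):
        super().__init__()
        self.kernel_size = kernel_size
        self.dilation = dilation
        # padding needed to keep the spatial resolution with stride 1
        self.pad = dilation * (kernel_size - 1) // 2

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # pad with -inf so the padded border never wins the max
        x = F.pad(x, [self.pad] * 4, value=float("-inf"))
        # a 3x3 window with dilation 4 spans a 9x9 neighbourhood
        return F.max_pool2d(x, self.kernel_size, stride=1,
                            dilation=self.dilation)


if __name__ == "__main__":
    x = torch.randn(1, 32, 64, 64)
    print(DilatedMaxPool()(x).shape)  # torch.Size([1, 32, 64, 64])
```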

SegNeXt: Rethinking Convolutional Attention Design for Semantic Segmentation

3 code implementations 18 Sep 2022 Meng-Hao Guo, Cheng-Ze Lu, Qibin Hou, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu

Notably, SegNeXt outperforms EfficientNet-L2 w/ NAS-FPN and achieves 90.6% mIoU on the Pascal VOC 2012 test leaderboard while using only 1/10 of its parameters.

Segmentation Semantic Segmentation

Visual Attention Network

18 code implementations 20 Feb 2022 Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu

In this paper, we propose a novel linear attention named large kernel attention (LKA) to enable self-adaptive and long-range correlations in self-attention while avoiding its shortcomings.

Image Classification Instance Segmentation +5
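
The LKA entry above describes attention built from large-kernel convolutions rather than softmax self-attention. The sketch below assumes the commonly reported decomposition (a depth-wise conv, a depth-wise dilated conv, and a point-wise conv whose output multiplies the input element-wise); the exact kernel sizes and dilation are assumptions, not taken from the snippet.

```python
# A minimal sketch of a large-kernel-attention style module. The 5x5 and
# 7x7/dilation-3 choices approximate a 21x21 receptive field and are
# assumptions; the attention map simply rescales the input element-wise.
import torch
import torch.nn as nn

class LargeKernelAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # local spatial context: 5x5 depth-wise convolution
        self.dw_conv = nn.Conv2d(channels, channels, kernel_size=5,
                                 padding=2, groups=channels)
        # long-range context: 7x7 depth-wise convolution with dilation 3
        self.dw_dilated = nn.Conv2d(channels, channels, kernel_size=7,
                                    padding=9, dilation=3, groups=channels)
        # channel mixing: 1x1 point-wise convolution
        self.pw_conv = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.pw_conv(self.dw_dilated(self.dw_conv(x)))
        return attn * x  # element-wise attention, no softmax


if __name__ == "__main__":
    x = torch.randn(1, 64, 56, 56)
    print(LargeKernelAttention(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```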

Is Attention Better Than Matrix Decomposition?

3 code implementations ICLR 2021 Zhengyang Geng, Meng-Hao Guo, Hongxu Chen, Xia Li, Ke Wei, Zhouchen Lin

As an essential ingredient of modern deep learning, the attention mechanism, especially self-attention, plays a vital role in global correlation discovery.

Conditional Image Generation Semantic Segmentation

Subdivision-Based Mesh Convolution Networks

1 code implementation 4 Jun 2021 Shi-Min Hu, Zheng-Ning Liu, Meng-Hao Guo, Jun-Xiong Cai, Jiahui Huang, Tai-Jiang Mu, Ralph R. Martin

Meshes with arbitrary connectivity can be remeshed to have Loop subdivision sequence connectivity via self-parameterization, making SubdivNet a general approach.

3D Classification

Can Attention Enable MLPs To Catch Up With CNNs?

no code implementations 31 May 2021 Meng-Hao Guo, Zheng-Ning Liu, Tai-Jiang Mu, Dun Liang, Ralph R. Martin, Shi-Min Hu

In the first week of May 2021, researchers from four different institutions (Google, Tsinghua University, Oxford University, and Facebook) shared their latest work [16, 7, 12, 17] on arXiv.org almost simultaneously, each proposing new learning architectures consisting mainly of linear layers and claiming them to be comparable, or even superior, to convolution-based models.

Beyond Self-attention: External Attention using Two Linear Layers for Visual Tasks

7 code implementations 5 May 2021 Meng-Hao Guo, Zheng-Ning Liu, Tai-Jiang Mu, Shi-Min Hu

Attention mechanisms, especially self-attention, have played an increasingly important role in deep feature representation for visual tasks.

Image Classification Image Generation +5
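
The external-attention entry above replaces self-attention with two linear layers acting as shared external memories. The sketch below follows the commonly reported formulation (a key memory, a value memory, and a double normalization over points and memory slots); the memory size and normalization details are assumptions rather than the paper's exact code.

```python
# A minimal sketch of attention built from two linear layers acting as
# learnable external memories. Memory size and the double-normalization
# step are assumptions; cost is linear in the number of tokens N.
import torch
import torch.nn as nn

class ExternalAttention(nn.Module):
    def __init__(self, d_model: int, memory_size: int = 64):
        super().__init__()
        self.mk = nn.Linear(d_model, memory_size, bias=False)  # key memory
        self.mv = nn.Linear(memory_size, d_model, bias=False)  # value memory

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.mk(x)                                      # (B, N, S)
        attn = attn.softmax(dim=1)                             # normalize over tokens
        attn = attn / (attn.sum(dim=2, keepdim=True) + 1e-9)   # l1-normalize over memory
        return self.mv(attn)                                   # (B, N, d_model)


if __name__ == "__main__":
    x = torch.randn(2, 1024, 128)
    print(ExternalAttention(128)(x).shape)  # torch.Size([2, 1024, 128])
```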

PCT: Point cloud transformer

11 code implementations 17 Dec 2020 Meng-Hao Guo, Jun-Xiong Cai, Zheng-Ning Liu, Tai-Jiang Mu, Ralph R. Martin, Shi-Min Hu

It is inherently permutation invariant for processing a sequence of points, making it well-suited for point cloud learning.

3D Part Segmentation 3D Point Cloud Classification +1
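
The PCT entry above emphasizes permutation invariance over point sequences. The sketch below is a generic illustration of that property (attention without positional encoding is permutation equivariant, and a symmetric max pooling makes the global feature invariant); it is not PCT's offset-attention design, and all module names are assumptions.

```python
# A minimal sketch of why attention plus symmetric pooling is permutation
# invariant for point sets. Generic illustration only, not PCT itself.
import torch
import torch.nn as nn

class PointAttentionPool(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.embed = nn.Linear(3, dim)                    # per-point embedding
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        x = self.embed(points)                            # (B, N, dim)
        x, _ = self.attn(x, x, x)                         # permutation-equivariant mixing
        return x.max(dim=1).values                        # permutation-invariant pooling


if __name__ == "__main__":
    torch.manual_seed(0)
    model = PointAttentionPool().eval()
    pts = torch.randn(1, 128, 3)
    perm = torch.randperm(128)
    out_a = model(pts)
    out_b = model(pts[:, perm, :])                        # shuffled input points
    print(torch.allclose(out_a, out_b, atol=1e-5))        # True
```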
