Search Results for author: Lemeng Wu

Found 21 papers, 8 papers with code

Communication Efficient Distributed Training with Distributed Lion

no code implementations 30 Mar 2024 Bo Liu, Lemeng Wu, Lizhang Chen, Kaizhao Liang, Jiaxu Zhu, Chen Liang, Raghuraman Krishnamoorthi, Qiang Liu

The Lion optimizer has emerged as a promising competitor to AdamW for training large AI models, with advantages in memory, computation, and sample efficiency.
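
The core Lion update (Chen et al., "Symbolic Discovery of Optimization Algorithms") is simple enough to sketch: the parameter step depends only on the sign of an interpolated momentum, which is also why distributed variants can be communication-efficient, since workers only need to exchange binary directions. A minimal single-worker sketch; `lion_step` is an illustrative name, not the paper's API:

```python
import numpy as np

def lion_step(w, g, m, lr=0.1, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion update: the step direction is only the sign of an
    interpolated momentum, so every coordinate moves by exactly +/-lr."""
    update = np.sign(beta1 * m + (1.0 - beta1) * g)  # sign-only step
    w = w - lr * (update + wd * w)                   # decoupled weight decay
    m = beta2 * m + (1.0 - beta2) * g                # momentum tracks gradients
    return w, m

# toy run on f(w) = w^2 (gradient 2w): each step moves w toward 0 by lr
w, m = 1.0, 0.0
for _ in range(5):
    w, m = lion_step(w, 2.0 * w, m)
```

Because the transmitted quantity is a sign vector, a distributed variant can get away with roughly one bit per coordinate of communication, which is the angle this paper develops.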

Language Rectified Flow: Advancing Diffusion Language Generation with Probabilistic Flows

no code implementations 25 Mar 2024 Shujian Zhang, Lemeng Wu, Chengyue Gong, Xingchao Liu

Extensive experiments and ablation studies demonstrate that our method can be general, effective, and beneficial for many NLP tasks.

Language Modelling Sentence +1

EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything

1 code implementation 1 Dec 2023 Yunyang Xiong, Bala Varadarajan, Lemeng Wu, Xiaoyu Xiang, Fanyi Xiao, Chenchen Zhu, Xiaoliang Dai, Dilin Wang, Fei Sun, Forrest Iandola, Raghuraman Krishnamoorthi, Vikas Chandra

On segment anything tasks such as zero-shot instance segmentation, our EfficientSAMs with SAMI-pretrained lightweight image encoders perform favorably, with a significant gain (e.g., ~4 AP on COCO/LVIS) over other fast SAM models.

Image Classification Instance Segmentation +5

AutoML-GPT: Automatic Machine Learning with GPT

no code implementations 4 May 2023 Shujian Zhang, Chengyue Gong, Lemeng Wu, Xingchao Liu, Mingyuan Zhou

Ultimately, with this prompt paragraph, AutoML-GPT automatically conducts the experiments, spanning data processing, model architecture selection, hyperparameter tuning, and training-log prediction.

AutoML

FlowGrad: Controlling the Output of Generative ODEs With Gradients

no code implementations CVPR 2023 Xingchao Liu, Lemeng Wu, Shujian Zhang, Chengyue Gong, Wei Ping, Qiang Liu

To further accelerate the back-propagation, we propose a non-uniform discretization to approximate the ODE trajectory: we measure how straight the trajectory is and gather the straight parts into a single discretization step.

Image Manipulation
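
The non-uniform discretization idea can be illustrated on a precomputed trajectory: keep only the points needed so that every dropped point stays close to the chord of its segment. This is a toy sketch of the straightness test, not the paper's implementation; `merge_straight_segments` and `tol` are illustrative names:

```python
import numpy as np

def merge_straight_segments(xs, tol=1e-2):
    """Given trajectory points xs[0..N] from a fine ODE simulation,
    greedily merge consecutive points into one step as long as every
    dropped point lies within `tol` of the chord (i.e. the piece of
    trajectory is straight); returns the kept indices."""
    keep = [0]
    i = 0
    while i < len(xs) - 1:
        j = i + 1
        # try to extend the segment while interior points stay near the chord
        while j + 1 < len(xs):
            chord = xs[j + 1] - xs[i]
            denom = np.dot(chord, chord)
            ok = True
            for k in range(i + 1, j + 1):
                t = np.dot(xs[k] - xs[i], chord) / denom
                if np.linalg.norm(xs[i] + t * chord - xs[k]) > tol:
                    ok = False
                    break
            if not ok:
                break
            j += 1
        keep.append(j)
        i = j
    return keep
```

On a perfectly straight trajectory the whole path collapses into one step, while a kinked trajectory keeps an extra step at the kink, which is the intuition behind gathering straight parts.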

PathFusion: Path-consistent Lidar-Camera Deep Feature Fusion

no code implementations 12 Dec 2022 Lemeng Wu, Dilin Wang, Meng Li, Yunyang Xiong, Raghuraman Krishnamoorthi, Qiang Liu, Vikas Chandra

Fusing 3D LiDAR features with 2D camera features is a promising technique for enhancing the accuracy of 3D detection, thanks to their complementary physical properties.

Fast Point Cloud Generation with Straight Flows

1 code implementation CVPR 2023 Lemeng Wu, Dilin Wang, Chengyue Gong, Xingchao Liu, Yunyang Xiong, Rakesh Ranjan, Raghuraman Krishnamoorthi, Vikas Chandra, Qiang Liu

We perform evaluations on multiple 3D tasks and find that our PSF performs comparably to the standard diffusion model, outperforming other efficient 3D point cloud generation methods.

Point Cloud Completion

Neural Volumetric Mesh Generator

no code implementations 6 Oct 2022 Yan Zheng, Lemeng Wu, Xingchao Liu, Zhen Chen, Qiang Liu, QiXing Huang

We first propose a diffusion-based generative model to tackle this problem by generating voxelized shapes with close-to-reality outlines and structures.

First Hitting Diffusion Models for Generating Manifold, Graph and Categorical Data

no code implementations 2 Sep 2022 Mao Ye, Lemeng Wu, Qiang Liu

We propose a family of First Hitting Diffusion Models (FHDM), deep generative models that generate data with a diffusion process that terminates at a random first hitting time.
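
A toy illustration of the first-hitting idea: run a one-dimensional Brownian motion and stop it the first time it reaches a barrier, so the process terminates exactly on the target set at a random stopping time. This is a hypothetical minimal example, not FHDM's actual construction on manifolds or graphs:

```python
import numpy as np

def first_hitting_sample(rng, dt=1e-3, barrier=1.0):
    """Toy first-hitting process: a 1D Brownian motion started at 0 is
    stopped the first time it reaches +/-barrier; the side it hits is
    the (binary) sample, and the stopping time itself is random."""
    x, t = 0.0, 0.0
    while abs(x) < barrier:
        x += np.sqrt(dt) * rng.standard_normal()
        t += dt
    return np.sign(x), t

rng = np.random.default_rng(0)
samples = [first_hitting_sample(rng)[0] for _ in range(200)]
```

Every sample lands exactly on the boundary set {-1, +1}, which is the point of terminating the diffusion at a first hitting time rather than at a fixed horizon.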

Diffusion-based Molecule Generation with Informative Prior Bridges

no code implementations 2 Sep 2022 Lemeng Wu, Chengyue Gong, Xingchao Liu, Mao Ye, Qiang Liu

AI-based molecule generation is a promising approach across biomedical science and engineering, including antibody design, hydrolase engineering, and vaccine development.

3D Generation Point Cloud Generation

Let us Build Bridges: Understanding and Extending Diffusion Generative Models

no code implementations 31 Aug 2022 Xingchao Liu, Lemeng Wu, Mao Ye, Qiang Liu

Diffusion-based generative models have achieved promising results recently, but raise an array of open questions in terms of conceptual understanding, theoretical analysis, algorithm improvement and extensions to discrete, structured, non-Euclidean domains.

Imputation

Residual Mixture of Experts

no code implementations 20 Apr 2022 Lemeng Wu, Mengchen Liu, Yinpeng Chen, Dongdong Chen, Xiyang Dai, Lu Yuan

In this paper, we propose Residual Mixture of Experts (RMoE), an efficient training pipeline for MoE vision transformers on downstream tasks, such as segmentation and detection.

Object Detection

How to Fill the Optimum Set? Population Gradient Descent with Harmless Diversity

no code implementations 16 Feb 2022 Chengyue Gong, Lemeng Wu, Qiang Liu

Although traditional optimization methods focus on finding a single optimal solution, most objective functions in modern machine learning, especially in deep learning, have multiple or even infinitely many optima.

Text-to-Image Generation
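
The idea of filling the optimum set with a diverse population, rather than converging to one point, can be sketched with a small particle system: each particle follows the loss gradient while a kernel repulsion keeps the population spread out. This toy version uses a plain repulsion term; the paper's "harmless" diversity mechanism is more careful about not degrading the loss:

```python
import numpy as np

def population_gd(x, grad_f, steps=500, lr=0.02, repulse=0.05, h=0.5):
    """Toy population descent: every particle follows the loss gradient
    while an RBF-kernel repulsion pushes particles apart, so the
    population covers several optima instead of collapsing onto one."""
    for _ in range(steps):
        d = x[:, None] - x[None, :]            # pairwise differences
        k = np.exp(-d ** 2 / h)                # RBF kernel weights
        rep = (2.0 / h) * (k * d).sum(axis=1)  # repulsion away from neighbors
        x = x - lr * (grad_f(x) - repulse * rep)
    return x

# double-well loss f(x) = (x^2 - 1)^2 with two optima, at x = -1 and x = +1
grad_f = lambda x: 4.0 * x * (x ** 2 - 1.0)
rng = np.random.default_rng(0)
x = population_gd(rng.normal(0.0, 0.1, size=8), grad_f)
```

On the double well, the population ends up occupying both optima rather than piling into whichever basin the initialization favored.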

FuseDream: Training-Free Text-to-Image Generation with Improved CLIP+GAN Space Optimization

1 code implementation 2 Dec 2021 Xingchao Liu, Chengyue Gong, Lemeng Wu, Shujian Zhang, Hao Su, Qiang Liu

We approach text-to-image generation by combining the power of the pretrained CLIP representation with an off-the-shelf image generator (GAN), optimizing in the GAN's latent space to find images that maximize the CLIP score for the given input text.

counterfactual Navigate +1
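
The training-free recipe, optimizing in the generator's latent space to maximize a text-image score, can be sketched with stand-ins: `generate` and `clip_score` below are toy differentiable functions playing the roles of the GAN and of CLIP, not the real models:

```python
import numpy as np

# Stand-ins for the real components (assumptions, NOT the actual models):
# `generate` plays the GAN generator, `clip_score` the CLIP similarity.
W = np.array([[0.5, -0.2], [0.1, 0.8]])
target = np.array([0.3, 0.6])                  # stands in for the text embedding
generate = lambda z: np.tanh(W @ z)            # "image" decoded from latent z
clip_score = lambda img: -np.sum((img - target) ** 2)

def optimize_latent(z, lr=0.1, steps=300):
    """Training-free generation: ascend the score in the generator's
    latent space; no network weights are ever updated."""
    for _ in range(steps):
        img = np.tanh(W @ z)
        # analytic gradient of clip_score(generate(z)) with respect to z
        g = W.T @ (-2.0 * (img - target) * (1.0 - img ** 2))
        z = z + lr * g
    return z

z = optimize_latent(np.zeros(2))
```

The design point is that all learning signal flows into the latent code z, which is what makes the method training-free with respect to both networks.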

Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks

1 code implementation NeurIPS 2020 Lemeng Wu, Bo Liu, Peter Stone, Qiang Liu

We propose firefly neural architecture descent, a general framework for progressively and dynamically growing neural networks to jointly optimize the networks' parameters and architectures.

Continual Learning Image Classification +1

Centroid Transformers: Learning to Abstract with Attention

no code implementations 17 Feb 2021 Lemeng Wu, Xingchao Liu, Qiang Liu

Self-attention, as the key block of transformers, is a powerful mechanism for extracting features from the inputs.

Abstractive Text Summarization Clustering +1
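
The abstraction step can be sketched as attention in which M centroids query N inputs, so each centroid becomes a softmax-weighted average of the inputs it attends to, a soft k-means flavor. A minimal numpy sketch with illustrative names, not the paper's architecture:

```python
import numpy as np

def centroid_attention(X, C, tau=1.0):
    """Abstract N inputs X (N, d) into M < N centroids C (M, d) with one
    attention step: centroids act as queries, inputs as keys and values,
    so each centroid becomes a softmax-weighted average of inputs."""
    logits = C @ X.T / tau                    # (M, N) centroid-to-input scores
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)         # rows are attention weights
    return w @ X                              # (M, d) updated centroids
```

Iterating this on clustered 2-D points pulls the centroids onto the cluster means, which is the sense in which attention "abstracts" many inputs into a few summaries.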

Greedy Optimization Provably Wins the Lottery: Logarithmic Number of Winning Tickets is Enough

1 code implementation NeurIPS 2020 Mao Ye, Lemeng Wu, Qiang Liu

Despite the great success of deep learning, recent works show that large deep neural networks are often highly redundant and can be significantly reduced in size.

Steepest Descent Neural Architecture Optimization: Escaping Local Optimum with Signed Neural Splitting

no code implementations 23 Mar 2020 Lemeng Wu, Mao Ye, Qi Lei, Jason D. Lee, Qiang Liu

Recently, Liu et al. [19] proposed a splitting steepest descent (S2D) method that jointly optimizes the neural parameters and architectures based on progressively growing network structures by splitting neurons into multiple copies in a steepest descent fashion.

Energy-Aware Neural Architecture Optimization with Fast Splitting Steepest Descent

1 code implementation ICLR 2020 Dilin Wang, Meng Li, Lemeng Wu, Vikas Chandra, Qiang Liu

Designing energy-efficient networks is of critical importance for enabling state-of-the-art deep learning in mobile and edge settings where the computation and energy budgets are highly limited.

Splitting Steepest Descent for Growing Neural Architectures

1 code implementation NeurIPS 2019 Qiang Liu, Lemeng Wu, Dilin Wang

We develop a progressive training approach for neural networks which adaptively grows the network structure by splitting existing neurons to multiple off-springs.
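
The splitting operation itself is easy to sketch for a one-hidden-layer network: a neuron is replaced by two offspring whose incoming weights are perturbed in opposite directions and whose outgoing weights are halved, leaving the network function unchanged to first order. A minimal sketch with illustrative names, not the paper's code:

```python
import numpy as np

def split_neuron(W1, W2, idx, delta, eps=1e-3):
    """Split hidden neuron `idx` into two offspring: incoming weights
    are perturbed along +/-eps*delta and outgoing weights are halved,
    so the network output changes only at second order in eps."""
    w_in, w_out = W1[idx].copy(), W2[:, idx].copy()
    W1_new = np.vstack([W1, w_in + eps * delta])   # offspring 1 appended
    W1_new[idx] = w_in - eps * delta               # offspring 2 replaces parent
    W2_new = np.hstack([W2, (w_out / 2.0)[:, None]])
    W2_new[:, idx] = w_out / 2.0
    return W1_new, W2_new
```

Because the split is output-preserving for small eps, training can keep descending from where it left off with one extra neuron of capacity, which is what makes progressive growing cheap.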

Path-Invariant Map Networks

1 code implementation CVPR 2019 Zaiwei Zhang, Zhenxiao Liang, Lemeng Wu, Xiaowei Zhou, Qi-Xing Huang

Optimizing a network of maps among a collection of objects/domains (or map synchronization) is a central problem across computer vision and many other relevant fields.

3D Semantic Segmentation Scene Segmentation +1
