Search Results for author: Weijie Su

Found 22 papers, 10 papers with code

Shifted Interpolation for Differential Privacy

1 code implementation1 Mar 2024 Jinho Bok, Weijie Su, Jason M. Altschuler

Notably, this leads to the first exact privacy analysis in the foundational setting of strongly convex optimization.

WildfireGPT: Tailored Large Language Model for Wildfire Analysis

no code implementations12 Feb 2024 Yangxinyu Xie, Tanwi Mallick, Joshua David Bergerson, John K. Hutchison, Duane R. Verner, Jordan Branham, M. Ross Alexander, Robert B. Ross, Yan Feng, Leslie-Anne Levy, Weijie Su

The recent advancement of large language models (LLMs) represents a transformational capability at the frontier of artificial intelligence (AI) and machine learning (ML).

Language Modelling Large Language Model

InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks

2 code implementations21 Dec 2023 Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Muyan Zhong, Qinglong Zhang, Xizhou Zhu, Lewei Lu, Bin Li, Ping Luo, Tong Lu, Yu Qiao, Jifeng Dai

However, the progress in vision and vision-language foundation models, which are also critical elements of multi-modal AGI, has not kept pace with LLMs.

Ranked #1 on Zero-Shot Video Retrieval on MSR-VTT-full (using extra training data)

Image Retrieval Image-to-Text Retrieval +10

Towards All-in-one Pre-training via Maximizing Multi-modal Mutual Information

1 code implementation CVPR 2023 Weijie Su, Xizhou Zhu, Chenxin Tao, Lewei Lu, Bin Li, Gao Huang, Yu Qiao, Xiaogang Wang, Jie Zhou, Jifeng Dai

It has been shown that combining multiple pre-training strategies and data from various modalities/sources can greatly boost the training of large-scale models.

Ranked #2 on Semantic Segmentation on ADE20K (using extra training data)

Image Classification Long-tailed Object Detection +3

The alignment property of SGD noise and how it helps select flat minima: A stability analysis

no code implementations6 Jul 2022 Lei Wu, Mingze Wang, Weijie Su

In this paper, we provide an explanation of this striking phenomenon by relating the particular noise structure of SGD to its linear stability (Wu et al., 2018).
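
The paper's analysis is for neural networks; as a loose numerical illustration of the alignment idea (a least-squares toy of my own construction, not the authors' setting), one can measure how strongly the SGD noise covariance at a minimum aligns with the Hessian:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 20
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.5 * rng.standard_normal(n)
w = np.linalg.lstsq(X, y, rcond=None)[0]       # a least-squares minimum

r = X @ w - y                                  # residuals at the minimum
grads = X * r[:, None]                         # per-sample gradients g_i = r_i * x_i
Sigma = grads.T @ grads / n                    # SGD noise covariance (full gradient ~ 0)
H = X.T @ X / n                                # Hessian of the quadratic loss
loss = 0.5 * np.mean(r ** 2)

# Frobenius cosine between the noise covariance and the Hessian
cos = np.sum(Sigma * H) / (np.linalg.norm(Sigma) * np.linalg.norm(H))
print(f"alignment (Frobenius cosine): {cos:.3f}")   # close to 1 in this toy
# Distance from the loss-scaled Hessian, the kind of relation the paper makes precise
print(f"||Sigma - 2*loss*H|| / ||Sigma||: "
      f"{np.linalg.norm(Sigma - 2 * loss * H) / np.linalg.norm(Sigma):.3f}")
```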

Siamese Image Modeling for Self-Supervised Vision Representation Learning

2 code implementations CVPR 2023 Chenxin Tao, Xizhou Zhu, Weijie Su, Gao Huang, Bin Li, Jie Zhou, Yu Qiao, Xiaogang Wang, Jifeng Dai

Driven by these analyses, we propose Siamese Image Modeling (SiameseIM), which predicts the dense representations of an augmented view based on another masked view from the same image but with different augmentations.
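
A schematic of that objective as I read it from this summary, with toy stand-in modules (the paper's actual ViT encoder, decoder, masking scheme, and EMA target update differ):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyEncoder(nn.Module):
    """Hypothetical stand-in for the paper's image encoder."""
    def __init__(self, dim=64):
        super().__init__()
        self.proj = nn.Linear(3 * 16 * 16, dim)      # patch-embedding stand-in
    def forward(self, patches, mask=None):
        tokens = self.proj(patches)
        if mask is not None:
            tokens = tokens * (~mask).unsqueeze(-1)  # zero out masked tokens
        return tokens

online, target = ToyEncoder(), ToyEncoder()
target.load_state_dict(online.state_dict())
for p in target.parameters():
    p.requires_grad_(False)                          # momentum/EMA target in practice

patches_masked = torch.randn(8, 196, 3 * 16 * 16)   # view 1: masked, one augmentation
patches_full = torch.randn(8, 196, 3 * 16 * 16)     # view 2: different augmentation
mask = torch.rand(8, 196) < 0.75                    # mask most of view-1 tokens

pred = online(patches_masked, mask)                 # predict dense features...
with torch.no_grad():
    tgt = target(patches_full)                      # ...of the other view
loss = F.smooth_l1_loss(pred, tgt)                  # dense prediction loss
loss.backward()
```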

Representation Learning Self-Supervised Learning +1

You Are the Best Reviewer of Your Own Papers: An Owner-Assisted Scoring Mechanism

no code implementations NeurIPS 2021 Weijie Su

To address this withholding of information, in this paper, I introduce the Isotonic Mechanism, a simple and efficient approach to improving on the imprecise raw scores by leveraging certain information that the owner is incentivized to provide.
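
Concretely, the Isotonic Mechanism adjusts the raw scores by a least-squares fit constrained to respect the owner's claimed ranking of their own papers; a minimal sketch (the function name and interface are mine):

```python
import numpy as np
from sklearn.isotonic import isotonic_regression

def isotonic_mechanism(raw_scores, owner_ranking):
    """Least-squares adjustment of raw review scores, constrained to be
    nonincreasing along the owner's claimed ranking (best paper first)."""
    scores = np.asarray(raw_scores, dtype=float)
    order = np.asarray(owner_ranking)
    fit = isotonic_regression(scores[order], increasing=False)
    adjusted = np.empty_like(scores)
    adjusted[order] = fit
    return adjusted

# Owner ranks paper 2 best, then 0, then 1; the raw scores disagree with that order
print(isotonic_mechanism([5.0, 7.0, 6.0], [2, 0, 1]))  # -> [6. 6. 6.]
```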

Deformable DETR: Deformable Transformers for End-to-End Object Detection

17 code implementations ICLR 2021 Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai

DETR has recently been proposed to eliminate the need for many hand-designed components in object detection while demonstrating good performance.

Real-Time Object Detection

Benign Overfitting and Noisy Features

no code implementations6 Aug 2020 Zhu Li, Weijie Su, Dino Sejdinovic

Modern machine learning often operates in the regime where the number of parameters is much higher than the number of data points, with zero training loss and yet good generalization, thereby contradicting the classical bias-variance trade-off.
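
The phenomenon is easy to reproduce numerically. Here is a spiked-covariance toy of my own (not the paper's random-feature setting): the minimum-norm interpolator attains zero training loss with p much larger than n, yet its test error stays well below the null predictor's in this configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 200, 2000, 0.25
lam = np.where(np.arange(p) < 10, 1.0, 0.01)       # spiked covariance spectrum
theta = np.zeros(p)
theta[:10] = 1 / np.sqrt(10)                       # signal in the top directions

X = rng.standard_normal((n, p)) * np.sqrt(lam)
y = X @ theta + sigma * rng.standard_normal(n)

w = X.T @ np.linalg.solve(X @ X.T, y)              # minimum-norm interpolator
print("train MSE:", np.mean((X @ w - y) ** 2))     # ~0: perfect interpolation

Xt = rng.standard_normal((5000, p)) * np.sqrt(lam)
yt = Xt @ theta + sigma * rng.standard_normal(5000)
print("test MSE:", np.mean((Xt @ w - yt) ** 2))    # small despite zero training loss
print("null MSE:", np.mean(yt ** 2))               # risk of predicting 0 everywhere
```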

Algorithmic Analysis and Statistical Estimation of SLOPE via Approximate Message Passing

1 code implementation NeurIPS 2019 Zhiqi Bu, Jason Klusowski, Cynthia Rush, Weijie Su

SLOPE is a relatively new convex optimization procedure for high-dimensional linear regression via the sorted ℓ1 penalty: the larger the rank of the fitted coefficient, the larger the penalty.
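
In symbols, the penalty is J(b) = Σ_i λ_i |b|_(i) with λ_1 ≥ ... ≥ λ_p ≥ 0, pairing the i-th largest coefficient magnitude with the i-th largest weight; a one-function sketch of evaluating it:

```python
import numpy as np

def sorted_l1_penalty(b, lam):
    """J(b) = sum_i lam_i * |b|_(i), for nonincreasing, nonnegative lam,
    so higher-ranked (larger) coefficients receive larger penalties."""
    return float(np.sum(np.sort(np.abs(b))[::-1] * np.asarray(lam)))
```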

Group SLOPE - adaptive selection of groups of predictors

1 code implementation17 Oct 2016 Damian Brzyski, Alexej Gossmann, Weijie Su, Malgorzata Bogdan

Sorted L-One Penalized Estimation (SLOPE) is a relatively new convex optimization procedure which allows for adaptive selection of regressors under sparse high dimensional designs.

Methodology 46N10 G.1.6

False Discoveries Occur Early on the Lasso Path

3 code implementations5 Nov 2015 Weijie Su, Malgorzata Bogdan, Emmanuel Candes

In regression settings where explanatory variables have very low correlations and there are relatively few effects, each of large magnitude, we expect the Lasso to find the important variables with few errors, if any.
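
A small simulation in the spirit of this setup (dimensions and signal strength are my illustrative choices, not the paper's) that locates the first false discovery along the Lasso path with scikit-learn:

```python
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(0)
n, p, k = 250, 500, 20
X = rng.standard_normal((n, p))                # essentially uncorrelated columns
beta = np.zeros(p)
beta[:k] = 4.0                                 # few effects, each of large magnitude
y = X @ beta + rng.standard_normal(n)

alphas, coefs, _ = lasso_path(X, y, n_alphas=100)
for alpha, c in zip(alphas, coefs.T):          # alphas run from largest to smallest
    selected = np.flatnonzero(c)
    n_false = np.sum(selected >= k)
    if n_false > 0:
        n_true = len(selected) - n_false
        print(f"first false discovery at alpha={alpha:.3f}, "
              f"with {n_true}/{k} true variables selected so far")
        break
```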

Communication-Efficient False Discovery Rate Control via Knockoff Aggregation

no code implementations17 Jun 2015 Weijie Su, Junyang Qian, Linxi Liu

The false discovery rate (FDR), the expected fraction of spurious discoveries among all the discoveries, provides a popular statistical assessment of the reproducibility of scientific studies in various disciplines.
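
For concreteness, the realized analogue of the FDR, the false discovery proportion (FDP), can be computed directly (the helper name is mine):

```python
import numpy as np

def false_discovery_proportion(selected, true_support):
    """FDP = (# spurious discoveries) / max(# discoveries, 1); FDR = E[FDP]."""
    selected = np.asarray(selected)
    spurious = np.setdiff1d(selected, np.asarray(true_support))
    return len(spurious) / max(len(selected), 1)

print(false_discovery_proportion([0, 1, 2, 9], [0, 1, 2, 3]))  # 1 spurious of 4 -> 0.25
```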

SLOPE is Adaptive to Unknown Sparsity and Asymptotically Minimax

no code implementations29 Mar 2015 Weijie Su, Emmanuel Candes

We consider high-dimensional sparse regression problems in which we observe $y = X \beta + z$, where $X$ is an $n \times p$ design matrix and $z$ is an $n$-dimensional vector of independent Gaussian errors, each with variance $\sigma^2$.
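
A minimal simulation of this observation model (dimensions, sparsity, and signal amplitude are illustrative choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k, sigma = 500, 1000, 10, 1.0
X = rng.standard_normal((n, p))                 # n x p design matrix
beta = np.zeros(p)
beta[:k] = 3 * sigma * np.sqrt(2 * np.log(p))   # a strong-signal scale, k-sparse
z = sigma * rng.standard_normal(n)              # independent Gaussian errors
y = X @ beta + z                                # the observation model y = X*beta + z
```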

Statistics Theory Information Theory

A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights

no code implementations4 Mar 2015 Weijie Su, Stephen Boyd, Emmanuel J. Candes

We derive a second-order ordinary differential equation (ODE) which is the limit of Nesterov's accelerated gradient method.
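
For reference, the limiting ODE stated in the paper, with Nesterov's iterates x_k tracking X(t) at t ≈ k√s for step size s, is:

```latex
\ddot{X}(t) + \frac{3}{t}\,\dot{X}(t) + \nabla f\bigl(X(t)\bigr) = 0,
\qquad X(0) = x_0, \quad \dot{X}(0) = 0
```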

A Differential Equation for Modeling Nesterov’s Accelerated Gradient Method: Theory and Insights

no code implementations NeurIPS 2014 Weijie Su, Stephen Boyd, Emmanuel Candes

We derive a second-order ordinary differential equation (ODE), which is the limit of Nesterov’s accelerated gradient method.

SLOPE - Adaptive variable selection via convex optimization

no code implementations14 Jul 2014 Małgorzata Bogdan, Ewout van den Berg, Chiara Sabatti, Weijie Su, Emmanuel J. Candès

SLOPE, short for Sorted L-One Penalized Estimation, is the solution to \[\min_{b\in\mathbb{R}^p}\frac{1}{2}\Vert y-Xb\Vert_{\ell_2}^2+\lambda_1\vert b\vert_{(1)}+\lambda_2\vert b\vert_{(2)}+\cdots+\lambda_p\vert b\vert_{(p)},\] where $\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_p\ge0$ and $\vert b\vert_{(1)}\ge\vert b\vert_{(2)}\ge\cdots\ge\vert b\vert_{(p)}$ are the decreasing absolute values of the entries of $b$.
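
The prox of this sorted-ℓ1 penalty reduces to an isotonic-regression step (the paper gives a fast exact algorithm; the sketch below leans on scikit-learn's isotonic solver instead), which makes a basic proximal-gradient SLOPE solver a few lines. Iteration count and step size are illustrative:

```python
import numpy as np
from sklearn.isotonic import isotonic_regression

def prox_sorted_l1(y, lam):
    """Prox of J(b) = sum_i lam_i * |b|_(i), for nonincreasing lam >= 0."""
    sign, mag = np.sign(y), np.abs(y)
    order = np.argsort(mag)[::-1]                  # indices by decreasing magnitude
    # Project (sorted |y| - lam) onto the nonincreasing, nonnegative cone.
    fit = isotonic_regression(mag[order] - lam, y_min=0.0, increasing=False)
    out = np.empty_like(y)
    out[order] = fit                               # restore original positions
    return sign * out                              # restore original signs

def slope(X, y, lam, n_iter=500):
    """SLOPE via proximal gradient (ISTA) on 0.5*||y - Xb||^2 + J(b)."""
    L = np.linalg.norm(X, 2) ** 2                  # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y)
        b = prox_sorted_l1(b - grad / L, lam / L)
    return b
```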

Methodology
