Search Results for author: Ngai Wong

Found 44 papers, 15 papers with code

Rethinking Kullback-Leibler Divergence in Knowledge Distillation for Large Language Models

no code implementations • 3 Apr 2024 • Taiqiang Wu, Chaofan Tao, Jiahao Wang, Zhe Zhao, Ngai Wong

Kullback-Leibler divergence has been widely used in Knowledge Distillation (KD) to compress Large Language Models (LLMs).

Knowledge Distillation
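As a minimal sketch of the objective discussed in the snippet above (illustrative only, not the paper's formulation; the function names are hypothetical), forward and reverse KL divergence between teacher and student distributions can be computed from softened logits:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward_kl(p_teacher, q_student, eps=1e-12):
    # KL(p || q): mean-seeking, the classic KD objective
    return float(np.sum(p_teacher * (np.log(p_teacher + eps) - np.log(q_student + eps))))

def reverse_kl(p_teacher, q_student, eps=1e-12):
    # KL(q || p): mode-seeking, often discussed as an alternative for LLM distillation
    return forward_kl(q_student, p_teacher, eps)

teacher = softmax(np.array([2.0, 0.5, -1.0]))
student = softmax(np.array([1.0, 1.0, 0.0]))
print(forward_kl(teacher, student), reverse_kl(teacher, student))
```

The two divergences weight the teacher's low-probability modes very differently, which is why the choice matters for distillation.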

Taming Lookup Tables for Efficient Image Retouching

1 code implementation • 28 Mar 2024 • Sidi Yang, Binxiao Huang, Mingdeng Cao, Yatai Ji, Hanzhong Guo, Ngai Wong, Yujiu Yang

Existing enhancement models often optimize for high performance while falling short of reducing hardware inference time and power consumption, especially on edge devices with constrained computing and storage resources.

Image Enhancement • Image Retouching

LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models

1 code implementation • 18 Feb 2024 • Yifan Yang, Jiajun Zhou, Ngai Wong, Zheng Zhang

Various parameter-efficient fine-tuning (PEFT) techniques have been proposed to enable computationally efficient fine-tuning while maintaining model performance.

Multi-Task Learning

Learning Spatially Collaged Fourier Bases for Implicit Neural Representation

no code implementations • 28 Dec 2023 • Jason Chun Lok Li, Chang Liu, Binxiao Huang, Ngai Wong

Existing approaches to Implicit Neural Representation (INR) can be interpreted as a global scene representation via a linear combination of Fourier bases of different frequencies.

3D Reconstruction • 3D Shape Representation
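As a toy version of the global view described in the snippet above (illustrative only; the paper instead collages bases spatially), a 1D signal can be represented as a linear combination of fixed-frequency Fourier bases fitted by least squares:

```python
import numpy as np

# Fit a 1D signal as a global linear combination of Fourier bases.
# Illustrative least-squares fit; the paper's method selects bases per spatial region.
x = np.linspace(0, 1, 200)
signal = np.sin(2 * np.pi * 3 * x) + 0.5 * np.cos(2 * np.pi * 7 * x)

freqs = np.arange(1, 11)
basis = np.concatenate([np.sin(2 * np.pi * freqs[:, None] * x),
                        np.cos(2 * np.pi * freqs[:, None] * x)], axis=0)  # (20, 200)

coeffs, *_ = np.linalg.lstsq(basis.T, signal, rcond=None)
recon = coeffs @ basis
print(np.abs(recon - signal).max())
```

Since the test signal lies exactly in the span of the chosen bases, the reconstruction error is at machine precision.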

A Unifying Tensor View for Lightweight CNNs

no code implementations • 15 Dec 2023 • Jason Chun Lok Li, Rui Lin, Jiajun Zhou, Edmund Yin Mun Lam, Ngai Wong

Although the decomposition of convolutional kernels for lightweight CNNs has been well studied, existing works that rely on tensor network diagrams or hyperdimensional abstraction lack geometric intuition.

Hundred-Kilobyte Lookup Tables for Efficient Single-Image Super-Resolution

no code implementations • 11 Dec 2023 • Binxiao Huang, Jason Chun Lok Li, Jie Ran, Boyu Li, Jiajun Zhou, Dahai Yu, Ngai Wong

Conventional super-resolution (SR) schemes make heavy use of convolutional neural networks (CNNs), which involve intensive multiply-accumulate (MAC) operations, and require specialized hardware such as graphics processing units.

Image Super-Resolution

Lite it fly: An All-Deformable-Butterfly Network

no code implementations • 14 Nov 2023 • Rui Lin, Jason Chun Lok Li, Jiajun Zhou, Binxiao Huang, Jie Ran, Ngai Wong

Most deep neural networks (DNNs) consist fundamentally of convolutional and/or fully connected layers, wherein the linear transform can be cast as the product between a filter matrix and a data matrix obtained by arranging feature tensors into columns.
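The filter-matrix-times-data-matrix view described in the snippet above is the classic im2col construction; a minimal sketch (hypothetical helper names) showing its equivalence to direct 2D correlation:

```python
import numpy as np

def im2col(x, k):
    # Arrange all k-by-k patches of x (H x W) into columns of a data matrix.
    H, W = x.shape
    cols = [x[i:i+k, j:j+k].ravel()
            for i in range(H - k + 1) for j in range(W - k + 1)]
    return np.stack(cols, axis=1)          # shape (k*k, num_patches)

def conv2d_as_matmul(x, w):
    # The conv layer's linear transform as filter row-vector times data matrix.
    k = w.shape[0]
    out = w.ravel() @ im2col(x, k)
    H, W = x.shape
    return out.reshape(H - k + 1, W - k + 1)

x = np.arange(16.0).reshape(4, 4)
w = np.array([[1.0, 0.0], [0.0, -1.0]])
print(conv2d_as_matmul(x, w))
```

Stacking several filters as rows of the filter matrix turns a multi-channel convolution into a single matrix product, which is exactly the form that factorization-based compression operates on.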

A Spectral Perspective towards Understanding and Improving Adversarial Robustness

no code implementations • 25 Jun 2023 • Binxiao Huang, Rui Lin, Chaofan Tao, Ngai Wong

Deep neural networks (DNNs) are incredibly vulnerable to crafted, imperceptible adversarial perturbations.

Adversarial Robustness

Overcoming Beam Squint in Dual-Wideband mmWave MIMO Channel Estimation: A Bayesian Multi-Band Sparsity Approach

no code implementations • 19 Jun 2023 • Le Xu, Lei Cheng, Ngai Wong, Yik-Chung Wu, H. Vincent Poor

A probabilistic model is built to induce the common sparsity in the spatial domain, and the first-order Taylor expansion is adopted to get rid of the grid mismatch in the dictionaries.

To Fold or Not to Fold: Graph Regularized Tensor Train for Visual Data Completion

1 code implementation • 19 Jun 2023 • Le Xu, Lei Cheng, Ngai Wong, Yik-Chung Wu

Tensor train (TT) representation has achieved tremendous success in visual data completion tasks, especially when it is combined with tensor folding.

Context-Aware Transformer for 3D Point Cloud Automatic Annotation

no code implementations • 27 Mar 2023 • Xiaoyan Qian, Chang Liu, Xiaojuan Qi, Siew-Chong Tan, Edmund Lam, Ngai Wong

3D automatic annotation has received increased attention since manually annotating 3D point clouds is laborious.

Object

Edge-free but Structure-aware: Prototype-Guided Knowledge Distillation from GNNs to MLPs

no code implementations • 24 Mar 2023 • Taiqiang Wu, Zhe Zhao, Jiahao Wang, Xingyu Bai, Lei Wang, Ngai Wong, Yujiu Yang

Distilling high-accuracy Graph Neural Networks (GNNs) to low-latency multilayer perceptrons (MLPs) on graph tasks has become a hot research topic.

Knowledge Distillation

Frequency Regularization for Improving Adversarial Robustness

no code implementations • 24 Dec 2022 • Binxiao Huang, Chaofan Tao, Rui Lin, Ngai Wong

Deep neural networks are incredibly vulnerable to crafted, human-imperceptible adversarial perturbations.

Adversarial Robustness

ODG-Q: Robust Quantization via Online Domain Generalization

no code implementations • 17 Oct 2022 • Chaofan Tao, Ngai Wong

To the best of our knowledge, this is the first work to train both quantized and binary neural networks on ImageNet that consistently improve robustness under different attacks.

Domain Generalization • Quantization

PECAN: A Product-Quantized Content Addressable Memory Network

no code implementations • 13 Aug 2022 • Jie Ran, Rui Lin, Jason Chun Lok Li, Jiajun Zhou, Ngai Wong

A novel deep neural network (DNN) architecture is proposed wherein the filtering and linear transform are realized solely with product quantization (PQ).

Quantization
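As background on product quantization itself (an illustrative encode/decode sketch with random codebooks, not the PECAN architecture; the function names are hypothetical):

```python
import numpy as np

def pq_encode(x, codebooks):
    # Split x into subvectors; map each subvector to the index of its nearest codeword.
    codes = []
    for m, cb in enumerate(codebooks):
        d = cb.shape[1]
        sub = x[m*d:(m+1)*d]
        codes.append(int(np.argmin(np.linalg.norm(cb - sub, axis=1))))
    return codes

def pq_decode(codes, codebooks):
    # Reassemble the approximation by concatenating the selected codewords.
    return np.concatenate([cb[c] for c, cb in zip(codes, codebooks)])

rng = np.random.default_rng(0)
codebooks = [rng.standard_normal((8, 2)) for _ in range(3)]  # 3 subspaces, 8 codewords each
x = rng.standard_normal(6)
codes = pq_encode(x, codebooks)
approx = pq_decode(codes, codebooks)
print(codes, np.linalg.norm(x - approx))
```

Because each subvector is stored as a small integer index, inner products against a fixed weight can be precomputed per codeword and served from lookup tables, which is the connection to content-addressable memory.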

Multimodal Transformer for Automatic 3D Annotation and Object Detection

1 code implementation • 20 Jul 2022 • Chang Liu, Xiaoyan Qian, Binxiao Huang, Xiaojuan Qi, Edmund Lam, Siew-Chong Tan, Ngai Wong

By enriching the sparse point clouds, our method achieves 4.48% and 4.03% better 3D AP on KITTI moderate and hard samples, respectively, versus the state-of-the-art autolabeler.

3D Object Detection • Object

Multilayer Perceptron Based Stress Evolution Analysis under DC Current Stressing for Multi-segment Wires

no code implementations • 17 May 2022 • Tianshu Hou, Peining Zhen, Ngai Wong, Quan Chen, Guoyong Shi, Shuqi Wang, Hai-Bao Chen

Electromigration (EM) is one of the major concerns in the reliability analysis of very large scale integration (VLSI) systems due to continued technology scaling.

MAP-Gen: An Automated 3D-Box Annotation Flow with Multimodal Attention Point Generator

no code implementations • 29 Mar 2022 • Chang Liu, Xiaoyan Qian, Xiaojuan Qi, Edmund Y. Lam, Siew-Chong Tan, Ngai Wong

While a few previous studies tried to automatically generate 3D bounding boxes from weak labels such as 2D boxes, the quality is sub-optimal compared to human annotators.

object-detection • Object Detection

Coarse to Fine: Image Restoration Boosted by Multi-Scale Low-Rank Tensor Completion

1 code implementation • 29 Mar 2022 • Rui Lin, Cong Chen, Ngai Wong

Existing low-rank tensor completion (LRTC) approaches aim at restoring a partially observed tensor by imposing a global low-rank constraint on the underlying completed tensor.

Image Restoration
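For intuition on the global low-rank constraint mentioned above, here is a minimal matrix-completion sketch via iterative SVD truncation (an illustrative matrix analogue, not the paper's multi-scale LRTC method; `svd_complete` is a hypothetical helper):

```python
import numpy as np

def svd_complete(observed, mask, rank, iters=200):
    # Alternate two steps: fill the missing entries, then project onto rank-r matrices.
    x = np.where(mask, observed, observed[mask].mean())
    for _ in range(iters):
        u, s, vt = np.linalg.svd(x, full_matrices=False)
        low_rank = (u[:, :rank] * s[:rank]) @ vt[:rank]
        x = np.where(mask, observed, low_rank)   # keep observed entries exact
    return x

rng = np.random.default_rng(1)
truth = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))  # rank-3 ground truth
mask = rng.random(truth.shape) > 0.3                                 # observe ~70% of entries
recovered = svd_complete(truth, mask, rank=3)
print(np.abs(recovered - truth).max())
```

The same alternating idea carries over to tensors by replacing the SVD projection with a low-rank tensor approximation.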

A Space-Time Neural Network for Analysis of Stress Evolution under DC Current Stressing

no code implementations • 29 Mar 2022 • Tianshu Hou, Ngai Wong, Quan Chen, Zhigang Ji, Hai-Bao Chen

The electromigration (EM)-induced reliability issues in very large scale integration (VLSI) circuits have attracted increased attention due to continued technology scaling.

Deformable Butterfly: A Highly Structured and Sparse Linear Transform

1 code implementation • NeurIPS 2021 • Rui Lin, Jie Ran, King Hung Chiu, Graziano Chesi, Ngai Wong

We introduce a new kind of linear transform named Deformable Butterfly (DeBut) that generalizes the conventional butterfly matrices and can be adapted to various input-output dimensions.
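For context on the butterfly matrices that DeBut generalizes: the 4×4 Hadamard transform factors into two sparse butterfly factors with only two nonzeros per row (an illustrative sketch, not the DeBut parameterization):

```python
import numpy as np

# Butterfly factorization: the 4x4 Hadamard matrix as a product of two
# sparse butterfly factors, each with only 2 nonzeros per row.
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
B1 = np.kron(H2, np.eye(2))      # mixes entries that are 2 apart
B2 = np.kron(np.eye(2), H2)      # mixes adjacent entries
H4 = B1 @ B2
print(H4)
```

Replacing the dense 4×4 matrix by the two sparse factors cuts the multiply count, and DeBut-style deformable factors extend this pattern to arbitrary input-output dimensions.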

Compression of Generative Pre-trained Language Models via Quantization

no code implementations • ACL 2022 • Chaofan Tao, Lu Hou, Wei Zhang, Lifeng Shang, Xin Jiang, Qun Liu, Ping Luo, Ngai Wong

We find that previous quantization methods fail on generative tasks due to the homogeneous word embeddings caused by reduced capacity, and the varied distribution of weights.

Model Compression • Quantization

Exploiting Elasticity in Tensor Ranks for Compressing Neural Networks

no code implementations • 10 May 2021 • Jie Ran, Rui Lin, Hayden K. H. So, Graziano Chesi, Ngai Wong

Elasticities in depth, width, kernel size and resolution have been explored in compressing deep neural networks (DNNs).

EZCrop: Energy-Zoned Channels for Robust Output Pruning

1 code implementation • 8 May 2021 • Rui Lin, Jie Ran, Dongpeng Wang, King Hung Chiu, Ngai Wong

Recent results have revealed an interesting observation in a trained convolutional neural network (CNN): the rank of a feature-map channel matrix remains surprisingly constant regardless of the input image.
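The observation above suggests per-channel matrix rank as a pruning signal. A minimal sketch of measuring it (`channel_ranks` is a hypothetical helper, not the EZCrop implementation):

```python
import numpy as np

def channel_ranks(feature_map, tol=1e-6):
    # feature_map: (C, H, W); numerical rank of each channel's H x W matrix.
    return [int(np.linalg.matrix_rank(feature_map[c], tol=tol))
            for c in range(feature_map.shape[0])]

rng = np.random.default_rng(2)
low = np.outer(rng.standard_normal(8), rng.standard_normal(8))   # rank-1 channel
full = rng.standard_normal((8, 8))                               # (almost surely) full-rank channel
fmap = np.stack([low, full])
print(channel_ranks(fmap))
```

Channels whose matrices are consistently low-rank carry redundant information and are natural pruning candidates.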

AET-EFN: A Versatile Design for Static and Dynamic Event-Based Vision

no code implementations • 22 Mar 2021 • Chang Liu, Xiaojuan Qi, Edmund Lam, Ngai Wong

Neuromorphic event cameras, which capture the optical changes of a scene, have drawn increasing attention due to their high speed and low power consumption.

Event-based vision

FAT: Learning Low-Bitwidth Parametric Representation via Frequency-Aware Transformation

1 code implementation • 15 Feb 2021 • Chaofan Tao, Rui Lin, Quan Chen, Zhaoyang Zhang, Ping Luo, Ngai Wong

Prior works often discretize the network weights by carefully tuning quantization hyper-parameters (e.g., non-uniform stepsize and layer-wise bitwidths), which are complicated and sub-optimal because the full-precision and low-precision models have a large discrepancy.

Neural Network Compression • Quantization

Tensor Train Factorization and Completion under Noisy Data with Prior Analysis and Rank Estimation

no code implementations • 13 Oct 2020 • Le Xu, Lei Cheng, Ngai Wong, Yik-Chung Wu

Tensor train (TT) decomposition, a powerful tool for analyzing multidimensional data, exhibits superior performance in many machine learning tasks.

Image Classification • Variational Inference
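For context, a tensor train represents a high-order tensor through a chain of small three-way cores. A minimal sketch of contracting TT cores back into the full tensor (illustrative only, not the paper's factorization or rank-estimation algorithm):

```python
import numpy as np

def tt_to_full(cores):
    # Contract TT cores G_k of shape (r_{k-1}, n_k, r_k) into the full tensor.
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full[0, ..., 0]   # drop the boundary ranks r_0 = r_d = 1

rng = np.random.default_rng(3)
cores = [rng.standard_normal(s) for s in [(1, 4, 2), (2, 5, 3), (3, 6, 1)]]
tensor = tt_to_full(cores)
print(tensor.shape)  # (4, 5, 6)
```

Each entry of the full tensor is a product of small matrices selected from the cores, so storage grows linearly in the order of the tensor instead of exponentially.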

HOTCAKE: Higher Order Tucker Articulated Kernels for Deeper CNN Compression

no code implementations • 28 Feb 2020 • Rui Lin, Ching-Yun Ko, Zhuolun He, Cong Chen, Yuan Cheng, Hao Yu, Graziano Chesi, Ngai Wong

The emergence of edge computing has promoted immense interest in compacting a neural network without sacrificing much accuracy.

Edge-computing • Tensor Decomposition

Kernelized Support Tensor Train Machines

no code implementations • 2 Jan 2020 • Cong Chen, Kim Batselier, Wenjian Yu, Ngai Wong

In this paper, we propose a tensor train (TT)-based kernel technique for the first time, and apply it to the conventional support vector machine (SVM) for image classification.

BIG-bench Machine Learning • Image Classification

Fastened CROWN: Tightened Neural Network Robustness Certificates

1 code implementation • 2 Dec 2019 • Zhaoyang Lyu, Ching-Yun Ko, Zhifeng Kong, Ngai Wong, Dahua Lin, Luca Daniel

We draw inspiration from such work and further demonstrate the optimality of deterministic CROWN (Zhang et al. 2018) solutions in a given linear programming problem under mild constraints.

MiSC: Mixed Strategies Crowdsourcing

no code implementations • 17 May 2019 • Ching-Yun Ko, Rui Lin, Shu Li, Ngai Wong

Popular crowdsourcing techniques mostly focus on evaluating workers' labeling quality before adjusting their weights during label aggregation.

POPQORN: Quantifying Robustness of Recurrent Neural Networks

2 code implementations • 17 May 2019 • Ching-Yun Ko, Zhaoyang Lyu, Tsui-Wei Weng, Luca Daniel, Ngai Wong, Dahua Lin

The vulnerability to adversarial attacks has been a critical issue for deep neural networks.

Matrix Product Operator Restricted Boltzmann Machines

no code implementations • 12 Nov 2018 • Cong Chen, Kim Batselier, Ching-Yun Ko, Ngai Wong

This work presents the matrix product operator RBM (MPORBM) that utilizes a tensor network generalization of Mv/TvRBM, preserves input formats in both the visible and hidden layers, and results in higher expressive power.

Denoising • Dimensionality Reduction

Deep Compression of Sum-Product Networks on Tensor Networks

no code implementations • 9 Nov 2018 • Ching-Yun Ko, Cong Chen, Yuke Zhang, Kim Batselier, Ngai Wong

Sum-product networks (SPNs) represent an emerging class of neural networks with clear probabilistic semantics and superior inference speed over graphical models.

Tensor Networks

A Support Tensor Train Machine

no code implementations • 17 Apr 2018 • Cong Chen, Kim Batselier, Ching-Yun Ko, Ngai Wong

There has been growing interest in extending traditional vector-based machine learning techniques to their tensor forms.

BIG-bench Machine Learning

Parallelized Tensor Train Learning of Polynomial Classifiers

1 code implementation • 20 Dec 2016 • Zhongming Chen, Kim Batselier, Johan A. K. Suykens, Ngai Wong

In pattern classification, polynomial classifiers are well-studied methods as they are capable of generating complex decision surfaces.

General Classification
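To illustrate why polynomial classifiers can generate complex decision surfaces: a degree-2 feature map separates the XOR pattern, which no linear classifier can (an illustrative sketch, unrelated to the paper's tensor-train parallelization):

```python
import numpy as np

# XOR labels: no linear decision boundary exists in the raw (x1, x2) space.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1.0, 1.0, 1.0, -1.0])

def poly_features(X):
    # Degree-2 polynomial feature map: [1, x1, x2, x1*x2].
    x1, x2 = X[:, 0], X[:, 1]
    return np.stack([np.ones_like(x1), x1, x2, x1 * x2], axis=1)

w, *_ = np.linalg.lstsq(poly_features(X), y, rcond=None)
pred = np.sign(poly_features(X) @ w)
print(pred)  # matches y
```

The cross term x1*x2 curves the decision surface; higher degrees add exponentially many such terms, which is the blow-up that tensor-train parameterizations tame.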

A Tensor Network Kalman filter with an application in recursive MIMO Volterra system identification

1 code implementation • 18 Oct 2016 • Kim Batselier, Zhongming Chen, Ngai Wong

This article introduces a Tensor Network Kalman filter, which can estimate state vectors that are exponentially large without ever having to explicitly construct them.

Systems and Control

A Constructive Algorithm for Decomposing a Tensor into a Finite Sum of Orthonormal Rank-1 Terms

1 code implementation • 7 Jul 2014 • Kim Batselier, Haotian Liu, Ngai Wong

We propose a constructive algorithm that decomposes an arbitrary real tensor into a finite sum of orthonormal rank-1 outer products.

Numerical Analysis
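The matrix analogue of such a decomposition is the SVD, which writes a matrix as a finite sum of orthonormal rank-1 outer products (illustrative only; the paper's algorithm handles tensors of arbitrary order):

```python
import numpy as np

# SVD as a finite sum of orthonormal rank-1 terms: A = sum_i s_i * u_i v_i^T,
# where the u_i and v_i are orthonormal.
rng = np.random.default_rng(4)
A = rng.standard_normal((5, 4))
u, s, vt = np.linalg.svd(A, full_matrices=False)
recon = sum(s[i] * np.outer(u[:, i], vt[i]) for i in range(len(s)))
print(np.abs(recon - A).max())
```

The tensor case is harder because, unlike matrices, a best orthonormal rank-1 expansion is not given directly by a single spectral factorization, which is what motivates a constructive algorithm.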
