Search Results for author: Tse-Wei Chen

Found 8 papers, 0 papers with code

CASSOD-Net: Cascaded and Separable Structures of Dilated Convolution for Embedded Vision Systems and Applications

no code implementations29 Apr 2021 Tse-Wei Chen, Deyu Wang, Wei Tao, Dongchao Wen, Lingxiao Yin, Tadayuki Ito, Kinya Osa, Masami Kato

In this paper, we propose a network module, Cascaded and Separable Structure of Dilated (CASSOD) Convolution, and a special hardware system to handle the CASSOD networks efficiently.

Face Detection Image Segmentation +1
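The benefit of cascading dilated convolutions, which the CASSOD module exploits, can be illustrated with a receptive-field calculation. This is a hypothetical sketch, not the paper's implementation; the function names are illustrative.

```python
# Illustrative sketch (not the paper's code): receptive-field growth of a
# cascade of stride-1 dilated convolutions.

def effective_kernel(k, d):
    """Effective kernel size of a k x k convolution with dilation d."""
    return k + (k - 1) * (d - 1)

def cascade_receptive_field(layers):
    """Receptive field of stacked stride-1 convolutions.
    `layers` is a list of (kernel_size, dilation) pairs."""
    rf = 1
    for k, d in layers:
        rf += effective_kernel(k, d) - 1
    return rf

# Three 3x3 layers with dilations 1, 2, 4 span a 15-pixel receptive field,
# matching a single dense 15x15 kernel with far fewer weights.
print(cascade_receptive_field([(3, 1), (3, 2), (3, 4)]))  # -> 15
```

A cascade of small dilated kernels covers the same context as one large dense kernel, which is why a dedicated hardware path for such cascades pays off.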

Hardware Architecture of Embedded Inference Accelerator and Analysis of Algorithms for Depthwise and Large-Kernel Convolutions

no code implementations29 Apr 2021 Tse-Wei Chen, Wei Tao, Deyu Wang, Dongchao Wen, Kinya Osa, Masami Kato

In order to handle modern convolutional neural networks (CNNs) efficiently, a hardware architecture of a CNN inference accelerator is proposed that handles both depthwise convolutions and regular convolutions, which are essential building blocks for embedded computer vision algorithms.

Face Detection Image Classification
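The cost gap between the two building blocks named above can be sketched with a simple multiply-accumulate (MAC) count; the shapes and function names below are assumptions for illustration, not figures from the paper.

```python
# Hedged sketch: MAC counts for a regular convolution vs. a
# depthwise-separable one (depthwise k x k + pointwise 1x1).

def regular_conv_macs(h, w, c_in, c_out, k):
    """MACs for a dense k x k convolution on an h x w feature map."""
    return h * w * c_in * c_out * k * k

def depthwise_separable_macs(h, w, c_in, c_out, k):
    depthwise = h * w * c_in * k * k   # one k x k filter per input channel
    pointwise = h * w * c_in * c_out   # 1x1 convolution mixes channels
    return depthwise + pointwise

h = w = 56
c_in = c_out = 128
k = 3
ratio = regular_conv_macs(h, w, c_in, c_out, k) / \
        depthwise_separable_macs(h, w, c_in, c_out, k)
print(ratio)  # roughly 8x fewer operations for the separable form
```

The skewed compute-to-bandwidth ratio of the depthwise part (few MACs per weight fetched) is exactly what makes it awkward for accelerators tuned to dense convolutions.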

BAMSProd: A Step towards Generalizing the Adaptive Optimization Methods to Deep Binary Model

no code implementations29 Sep 2020 Junjie Liu, Dongchao Wen, Deyu Wang, Wei Tao, Tse-Wei Chen, Kinya Osa, Masami Kato

In this paper, we provide an explicit convex optimization example in which training BNNs with traditional adaptive optimization methods still risks non-convergence, and we identify that constraining the range of gradients is critical for optimizing deep binary models and avoiding highly suboptimal solutions.

Quantization
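One simple way to constrain the gradient range inside an adaptive update is to clip the gradient before the moment estimates. This is an illustrative sketch in that spirit, not BAMSProd itself; all hyperparameters are assumed.

```python
# Illustrative only: an Adam-style step with the raw gradient clipped to a
# bounded range, in the spirit of constraining gradients when training
# binary models. Not the BAMSProd algorithm.
import numpy as np

def clipped_adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999,
                      eps=1e-8, clip=1.0):
    g = np.clip(g, -clip, clip)          # constrain the gradient range
    m = b1 * m + (1 - b1) * g            # first-moment estimate
    v = b2 * v + (1 - b2) * g * g        # second-moment estimate
    m_hat = m / (1 - b1 ** t)            # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

w = m = v = np.zeros(3)
g = np.array([10.0, -10.0, 0.5])         # one extreme gradient, one moderate
w, m, v = clipped_adam_step(w, g, m, v, t=1)
print(w)  # every coordinate moves by at most ~lr, regardless of |g|
```

After clipping, the first step reduces to roughly `lr * sign(g)`, so an exploding gradient cannot push the latent weights into a highly suboptimal region.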

QuantNet: Learning to Quantize by Learning within Fully Differentiable Framework

no code implementations10 Sep 2020 Junjie Liu, Dongchao Wen, Deyu Wang, Wei Tao, Tse-Wei Chen, Kinya Osa, Masami Kato

Despite the achievements of recent binarization methods on reducing the performance degradation of Binary Neural Networks (BNNs), gradient mismatching caused by the Straight-Through-Estimator (STE) still dominates quantized networks.

Binarization Image Classification +1
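The STE mentioned above can be shown in a few lines: the forward pass binarizes, while the backward pass pretends the binarization was (gated) identity, which is the source of the gradient mismatch. A minimal numpy sketch, with the common |w| <= 1 gradient gate assumed:

```python
# Minimal sketch of the Straight-Through Estimator (STE). The true gradient
# of sign() is zero almost everywhere; STE substitutes a gated identity,
# and that substitution is the "gradient mismatch" QuantNet targets.
import numpy as np

def binarize_forward(w):
    """Forward pass: hard binarization of latent weights."""
    return np.sign(w)

def ste_backward(w, grad_out, clip=1.0):
    """Backward pass: pass the upstream gradient through unchanged,
    zeroed where |w| exceeds the clip threshold."""
    return grad_out * (np.abs(w) <= clip)

w = np.array([-1.5, -0.3, 0.2, 2.0])
print(binarize_forward(w))               # [-1. -1.  1.  1.]
print(ste_backward(w, np.ones_like(w)))  # [0. 1. 1. 0.]
```

A fully differentiable framework such as QuantNet aims to remove this hand-crafted backward substitution altogether.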

IFQ-Net: Integrated Fixed-point Quantization Networks for Embedded Vision

no code implementations19 Nov 2019 Hongxing Gao, Wei Tao, Dongchao Wen, Tse-Wei Chen, Kinya Osa, Masami Kato

Furthermore, based on YOLOv2, we design the IFQ-Tinier-YOLO face detector, a fixed-point network with a 256x smaller model size (246k Bytes) than Tiny-YOLO.

Face Detection Image Classification +1
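Fixed-point quantization of the kind IFQ-Net integrates maps float weights onto an integer grid with a fixed scale. The Q-format (8 bits total, 6 fractional) and rounding choices below are assumptions for illustration:

```python
# Hedged sketch of signed fixed-point weight quantization; the Q2.6 format
# chosen here is an assumption, not the paper's configuration.
import numpy as np

def to_fixed_point(x, frac_bits=6, total_bits=8):
    """Quantize floats to signed fixed-point integer codes."""
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))          # -128 for 8 bits
    hi = (1 << (total_bits - 1)) - 1       # +127 for 8 bits
    return np.clip(np.round(x * scale), lo, hi).astype(np.int8)

def from_fixed_point(q, frac_bits=6):
    """Recover the real values the integer codes represent."""
    return q.astype(np.float32) / (1 << frac_bits)

w = np.array([0.5, -0.731, 1.9, -3.0], dtype=np.float32)
q = to_fixed_point(w)
print(q)                    # [  32  -47  122 -128]; -3.0 saturates
print(from_fixed_point(q))  # values snapped to the 1/64 grid
```

Storing int8 codes instead of float32 weights gives the 4x storage saving per weight; combining it with a tiny architecture yields reductions like the 256x quoted above.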

DupNet: Towards Very Tiny Quantized CNN with Improved Accuracy for Face Detection

no code implementations13 Nov 2019 Hongxing Gao, Wei Tao, Dongchao Wen, Junjie Liu, Tse-Wei Chen, Kinya Osa, Masami Kato

Firstly, we employ weights with duplicated channels for the weight-intensive layers to reduce the model size.

Ranked #1 on Face Detection on WIDER Face (GFLOPs metric)

Face Detection Quantization
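The duplicated-channel idea can be sketched directly: store only half the unique filters and repeat each one, so the weight memory for the layer is halved. All shapes and names below are illustrative assumptions:

```python
# Illustrative sketch (shapes assumed): a weight tensor built from
# duplicated channels, so only half the unique filters are stored.
import numpy as np

rng = np.random.default_rng(0)
c_out, c_in, k = 8, 4, 3

# Only c_out // 2 unique filters are kept in weight memory.
unique = rng.standard_normal((c_out // 2, c_in, k, k)).astype(np.float32)

# Each stored filter is used twice along the output-channel axis,
# yielding a full-width layer from half the parameters.
weights = np.repeat(unique, 2, axis=0)

print(weights.shape)                   # (8, 4, 3, 3): full output width
print(unique.nbytes, weights.nbytes)   # half the bytes actually stored
```

For a weight-intensive layer this halves storage at the cost of tying pairs of output channels together, a trade the paper argues is favorable for very tiny quantized detectors.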

Knowledge Representing: Efficient, Sparse Representation of Prior Knowledge for Knowledge Distillation

no code implementations13 Nov 2019 Junjie Liu, Dongchao Wen, Hongxing Gao, Wei Tao, Tse-Wei Chen, Kinya Osa, Masami Kato

Although recent works on knowledge distillation (KD) have achieved further improvements by elaborately modeling the decision boundary as posterior knowledge, their performance still depends on the hypothesis that the target network has a powerful capacity (representation ability).

Ranked #182 on Image Classification on CIFAR-10 (using extra training data)

Image Classification Knowledge Distillation
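For context, the baseline these methods build on is the standard softened-softmax distillation loss; the sketch below shows that baseline only and does not reproduce the paper's sparse representation of prior knowledge. Temperature T = 4 is an assumed choice.

```python
# Standard (Hinton-style) knowledge-distillation loss sketch: KL divergence
# between temperature-softened teacher and student distributions.
# This is the common baseline, not this paper's method.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # numerically stable
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on softened distributions, scaled by T^2."""
    p = softmax(teacher_logits / T)
    q = softmax(student_logits / T)
    return float(np.sum(p * (np.log(p) - np.log(q))) * T * T)

teacher = np.array([[5.0, 1.0, -2.0]])
print(kd_loss(teacher, teacher))            # 0.0 when the student matches
print(kd_loss(np.zeros((1, 3)), teacher))   # positive when it does not
```

The loss is zero only when the student reproduces the teacher's softened distribution, which is why a low-capacity student struggles, the failure mode this paper's sparse prior representation addresses.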
