Search Results for author: Ming Tan

Found 23 papers, 8 papers with code

Code Representation Learning At Scale

no code implementations • 2 Feb 2024 • Dejiao Zhang, Wasi Ahmad, Ming Tan, Hantian Ding, Ramesh Nallapati, Dan Roth, Xiaofei Ma, Bing Xiang

Recent studies have shown that code language models at scale demonstrate significant performance gains on downstream tasks, e.g., code generation.

Code Generation Contrastive Learning +3

Improving Prompt Tuning with Learned Prompting Layers

no code implementations • 31 Oct 2023 • Wei Zhu, Ming Tan

Prompt tuning prepends a soft prompt to the input embeddings or hidden states and only optimizes the prompt to adapt pretrained models (PTMs) to downstream tasks.
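
A minimal sketch of that mechanism, assuming a PyTorch-style encoder; names like n_prompt_tokens and SoftPromptModel are illustrative, not from the paper:

    import torch
    import torch.nn as nn

    class SoftPromptModel(nn.Module):
        # Prepends trainable prompt vectors to the input embeddings; only the
        # prompt receives gradient updates, the PTM itself stays frozen.
        def __init__(self, ptm, n_prompt_tokens=20, hidden=768):
            super().__init__()
            self.ptm = ptm
            for p in self.ptm.parameters():
                p.requires_grad = False
            self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, hidden) * 0.02)

        def forward(self, input_embeds):               # (batch, seq, hidden)
            batch = input_embeds.size(0)
            prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
            return self.ptm(torch.cat([prompt, input_embeds], dim=1))

    # Toy usage with a stand-in transformer encoder:
    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True),
        num_layers=2)
    out = SoftPromptModel(encoder)(torch.randn(4, 16, 768))  # (4, 36, 768)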

Cross-View Hierarchy Network for Stereo Image Super-Resolution

1 code implementation • 13 Apr 2023 • Wenbin Zou, Hongxia Gao, Liang Chen, Yunchen Zhang, Mingchao Jiang, Zhongxin Yu, Ming Tan

Stereo image super-resolution aims to improve the quality of high-resolution stereo image pairs by exploiting complementary information across views.

Stereo Image Super-Resolution

Multi-lingual Evaluation of Code Generation Models

2 code implementations • 26 Oct 2022 • Ben Athiwaratkun, Sanjay Krishna Gouda, Zijian Wang, Xiaopeng Li, Yuchen Tian, Ming Tan, Wasi Uddin Ahmad, Shiqi Wang, Qing Sun, Mingyue Shang, Sujan Kumar Gonugondla, Hantian Ding, Varun Kumar, Nathan Fulton, Arash Farahani, Siddhartha Jain, Robert Giaquinto, Haifeng Qian, Murali Krishna Ramanathan, Ramesh Nallapati, Baishakhi Ray, Parminder Bhatia, Sudipta Sengupta, Dan Roth, Bing Xiang

Using these benchmarks, we are able to assess the performance of code generation models in a multi-lingual fashion, and we discover the generalization ability of language models on out-of-domain languages, the advantages of multi-lingual models over mono-lingual ones, the ability of few-shot prompting to teach the model new languages, and zero-shot translation abilities even in mono-lingual settings.

Code Completion Code Translation +1
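
Benchmarks of this kind typically score functional correctness with the unbiased pass@k estimator; a minimal sketch of that standard formula (my illustration, not necessarily this paper's exact protocol):

    import math

    def pass_at_k(n, c, k):
        # Unbiased estimator: 1 - C(n - c, k) / C(n, k), where n samples were
        # drawn per problem and c of them passed the tests; computed as a
        # stable running product rather than with raw binomial coefficients.
        if n - c < k:
            return 1.0
        return 1.0 - math.prod((n - c - i) / (n - i) for i in range(k))

    print(pass_at_k(n=100, c=25, k=10))   # ~0.95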

DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization

2 code implementations • ACL 2022 • Zheng Li, Zijian Wang, Ming Tan, Ramesh Nallapati, Parminder Bhatia, Andrew Arnold, Bing Xiang, Dan Roth

Empirical analyses show that, despite the challenging nature of generative tasks, we were able to achieve a 16.5x model footprint compression ratio with little performance drop relative to the full-precision counterparts on multiple summarization and QA datasets.

Knowledge Distillation Model Compression +2
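
A toy sketch of the two ingredients, joint distillation and (fake) quantization, using small stand-in networks rather than BART itself; fake_quant_ste and the layer sizes are my illustrative assumptions:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def fake_quant_ste(w, bits=8):
        # Symmetric per-tensor fake quantization with a straight-through
        # estimator, so gradients still reach the underlying float weights.
        scale = w.detach().abs().max() / (2 ** (bits - 1) - 1)
        q = torch.clamp(torch.round(w / scale),
                        -(2 ** (bits - 1)), 2 ** (bits - 1) - 1) * scale
        return w + (q - w).detach()

    teacher = nn.Linear(32, 10)   # stand-in for the full-precision teacher
    student = nn.Linear(32, 10)   # stand-in for the smaller quantized student

    x = torch.randn(16, 32)
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = F.linear(x, fake_quant_ste(student.weight), student.bias)

    # Distillation: the quantized student mimics the teacher's distribution.
    loss = F.kl_div(F.log_softmax(s_logits, dim=-1),
                    F.softmax(t_logits, dim=-1), reduction="batchmean")
    loss.backward()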

New Benchmark for Household Garbage Image Recognition

no code implementations • 24 Feb 2022 • Zhize Wu, Huanyi Li, XiaoFeng Wang, Zijun Wu, Le Zou, Lixiang Xu, Ming Tan

Household garbage images usually have complex backgrounds, variable illumination, diverse angles, and changeable shapes, all of which make garbage image classification very difficult.

Classification Image Classification +1

Skeleton Based Action Recognition using a Stacked Denoising Autoencoder with Constraints of Privileged Information

no code implementations • 12 Mar 2020 • Zhize Wu, Thomas Weise, Le Zou, Fei Sun, Ming Tan

Differing from previous studies, we propose a new method called Denoising Autoencoder with Temporal and Categorical Constraints (DAE_CTC) to study the skeletal representation from the perspective of skeleton reconstruction.

Action Recognition Denoising +2
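
A bare-bones denoising-autoencoder core on skeleton frames; the paper's temporal and categorical constraints would be additional loss terms not shown here, and the joint count and layer sizes are illustrative:

    import torch
    import torch.nn as nn

    # One skeleton frame: 25 joints x 3-D coordinates, flattened to 75 values.
    enc = nn.Sequential(nn.Linear(75, 32), nn.ReLU())
    dec = nn.Linear(32, 75)
    opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)

    clean = torch.randn(64, 75)                      # toy batch of frames
    noisy = clean + 0.1 * torch.randn_like(clean)    # corrupt the input

    loss = nn.functional.mse_loss(dec(enc(noisy)), clean)  # rebuild the clean frame
    opt.zero_grad(); loss.backward(); opt.step()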

Context-Aware Conversation Thread Detection in Multi-Party Chat

no code implementations • IJCNLP 2019 • Ming Tan, Dakuo Wang, Yupeng Gao, Haoyu Wang, Saloni Potdar, Xiaoxiao Guo, Shiyu Chang, Mo Yu

In multi-party chat, it is common for multiple conversations to occur concurrently, leading to intermingled conversation threads in chat logs.

Group Chat Ecology in Enterprise Instant Messaging: How Employees Collaborate Through Multi-User Chat Channels on Slack

no code implementations • 4 Jun 2019 • Dakuo Wang, Haoyu Wang, Mo Yu, Zahra Ashktorab, Ming Tan

We cross-referenced 117 project teams and their team-based Slack channels and identified 57 teams that appeared in both datasets; we then built a regression model to reveal the relationship between these group communication styles and project team performance.

Descriptive
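
A toy version of that final regression step, with synthetic numbers purely for illustration (the study's actual features and model specification are in the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(57, 3))          # 57 teams x 3 communication-style features
    y = 0.8 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=0.1, size=57)

    X1 = np.column_stack([np.ones(57), X])          # add an intercept column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)   # ordinary least squares
    print(coef)   # intercept followed by one weight per style feature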

FastHybrid: A Hybrid Model for Efficient Answer Selection

no code implementations • COLING 2016 • Lidan Wang, Ming Tan, Jiawei Han

In this paper, we propose an extremely efficient hybrid model (FastHybrid) that tackles the problem from both an accuracy and scalability point of view.

Answer Selection Information Retrieval +2

Attentive Pooling Networks

3 code implementations • 11 Feb 2016 • Cicero dos Santos, Ming Tan, Bing Xiang, Bo-Wen Zhou

In this work, we propose Attentive Pooling (AP), a two-way attention mechanism for discriminative model training.

Answer Selection Representation Learning
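
A compact sketch of the two-way attention idea: a shared similarity matrix between the two inputs is max-pooled along each axis to attend over both sequences. Dimensions and the toy cosine-scoring usage are illustrative:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def attentive_pooling(Q, A, U):
        # Q: (m, d) question states, A: (n, d) answer states, U: (d, d) learned.
        G = torch.tanh(Q @ U @ A.T)                 # (m, n) soft alignment
        q_att = F.softmax(G.max(dim=1).values, 0)   # weights over question positions
        a_att = F.softmax(G.max(dim=0).values, 0)   # weights over answer positions
        return q_att @ Q, a_att @ A                 # pooled (d,) representations

    d = 64
    U = nn.Parameter(torch.randn(d, d) * 0.01)
    rq, ra = attentive_pooling(torch.randn(10, d), torch.randn(14, d), U)
    score = F.cosine_similarity(rq, ra, dim=0)      # similarity used for ranking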

LSTM-based Deep Learning Models for Non-factoid Answer Selection

2 code implementations • 12 Nov 2015 • Ming Tan, Cicero dos Santos, Bing Xiang, Bo-Wen Zhou

One direction is to define a more composite representation for questions and answers by combining a convolutional neural network with the basic framework.

Answer Selection
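
A sketch of that combined representation, a 1-D convolution over biLSTM hidden states followed by max pooling into a fixed-size vector; the vocabulary and layer sizes here are toy values, not the paper's configuration:

    import torch
    import torch.nn as nn

    class QAEncoder(nn.Module):
        # biLSTM over word embeddings, then a convolution over the hidden
        # states, max-pooled into a fixed-size sentence vector.
        def __init__(self, vocab=5000, emb=100, hidden=128, filters=200):
            super().__init__()
            self.emb = nn.Embedding(vocab, emb)
            self.lstm = nn.LSTM(emb, hidden, bidirectional=True, batch_first=True)
            self.conv = nn.Conv1d(2 * hidden, filters, kernel_size=3, padding=1)

        def forward(self, tokens):                    # (batch, seq)
            h, _ = self.lstm(self.emb(tokens))        # (batch, seq, 2*hidden)
            c = torch.relu(self.conv(h.transpose(1, 2)))
            return c.max(dim=2).values                # (batch, filters)

    enc = QAEncoder()
    q = enc(torch.randint(0, 5000, (8, 20)))
    a = enc(torch.randint(0, 5000, (8, 40)))
    score = nn.functional.cosine_similarity(q, a)     # ranking score per pair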

Direct 0-1 Loss Minimization and Margin Maximization with Boosting

no code implementations • NeurIPS 2013 • Shaodan Zhai, Tian Xia, Ming Tan, Shaojun Wang

We propose a boosting method, DirectBoost: a greedy coordinate descent algorithm that builds an ensemble classifier of weak classifiers by directly minimizing empirical classification error over labeled training examples. Once the training classification error is reduced to a local coordinatewise minimum, DirectBoost runs a greedy coordinate ascent algorithm that continues to add weak classifiers to maximize any targeted, arbitrarily defined margins, until it reaches a local coordinatewise maximum of those margins in a certain sense.

Classification General Classification
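
A greatly simplified sketch of the first phase, greedily adding weak classifiers to directly reduce the training 0-1 error. The unweighted-vote ensemble and decision stumps are my stand-ins; the actual algorithm also optimizes classifier weights coordinatewise and then switches to margin maximization:

    import numpy as np

    def stump(feature, thresh, sign):
        return lambda X: sign * np.where(X[:, feature] > thresh, 1, -1)

    def greedy_01_boost(X, y, rounds=10):
        # Each round, add the stump that most reduces the empirical 0-1
        # error of the ensemble's (unweighted) vote.
        stumps, score = [], np.zeros(len(y))
        candidates = [stump(f, t, s) for f in range(X.shape[1])
                      for t in np.unique(X[:, f]) for s in (1, -1)]
        for _ in range(rounds):
            errs = [np.mean(np.sign(score + h(X)) != y) for h in candidates]
            best = candidates[int(np.argmin(errs))]
            stumps.append(best)
            score += best(X)
        return stumps, np.mean(np.sign(score) != y)

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
    _, train_err = greedy_01_boost(X, y)
    print(train_err)   # driven toward a (local) minimum of the 0-1 loss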
