Search Results for author: Chu-Ren Huang

Found 95 papers, 3 papers with code

Inclusion in CSR Reports: The Lens from a Data-Driven Machine Learning Model

no code implementations CSRNLP (LREC) 2022 Lu Lu, Jinghang Gu, Chu-Ren Huang

Inclusion, as one of the foundations of the diversity, equity, and inclusion initiative, concerns the degree to which one is treated as an ingroup member in the workplace.

Framing Legitimacy in CSR: A Corpus of Chinese and American Petroleum Company CSR Reports and Preliminary Analysis

no code implementations CSRNLP (LREC) 2022 Jieyu Chen, Kathleen Ahrens, Chu-Ren Huang

The BUILDING source domain was used more often as gain frames in both Chinese and American CSR reports to show how oil companies create benefits for different stakeholders.

Lexicon of Changes: Towards the Evaluation of Diachronic Semantic Shift in Chinese

no code implementations LChange (ACL) 2022 Jing Chen, Emmanuele Chersoni, Chu-Ren Huang

Recent research has brought a wave of computational approaches to the classic topic of semantic change, aiming to tackle one of the most challenging issues in the evolution of human language.

Decoding Word Embeddings with Brain-Based Semantic Features

no code implementations CL (ACL) 2021 Emmanuele Chersoni, Enrico Santus, Chu-Ren Huang, Alessandro Lenci

For each probing task, we identify the most relevant semantic features and we show that there is a correlation between the embedding performance and how they encode those features.

Retrieval Word Embeddings

Is Domain Adaptation Worth Your Investment? Comparing BERT and FinBERT on Financial Tasks

no code implementations EMNLP (ECONLP) 2021 Bo Peng, Emmanuele Chersoni, Yu-Yin Hsu, Chu-Ren Huang

With the recent rise in popularity of Transformer models in Natural Language Processing, research efforts have been dedicated to the development of domain-adapted versions of BERT-like architectures.

Continual Pretraining Domain Adaptation

Discovering Financial Hypernyms by Prompting Masked Language Models

no code implementations FNP (LREC) 2022 Bo Peng, Emmanuele Chersoni, Yu-Yin Hsu, Chu-Ren Huang

With the rising popularity of Transformer-based language models, several studies have tried to exploit their masked language modeling capabilities to automatically extract relational linguistic knowledge, although this kind of research has rarely investigated semantic relations in specialized domains.

Domain Adaptation Language Modelling +1
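The snippet below is a toy sketch of the cloze-prompting idea described in the abstract above. The template, the candidate scores, and the function names are invented stand-ins, not the authors' system: a real setup would query a masked language model (e.g. via a fill-mask pipeline) instead of the hard-coded score table used here.

```python
# Hypothetical sketch of hypernym discovery via cloze prompting.
# A masked LM would score candidate fillers for the [MASK] slot; here a toy
# scoring table stands in for the model, so all names and numbers are invented.

HEARST_TEMPLATE = "{term} is a kind of [MASK]."

# Stand-in for masked-LM fill-mask probabilities (illustrative only).
TOY_MLM_SCORES = {
    "bond": {"security": 0.41, "instrument": 0.22, "fruit": 0.01},
    "equity": {"security": 0.35, "asset": 0.28, "animal": 0.01},
}

def build_prompt(term: str) -> str:
    """Insert the target term into the cloze template."""
    return HEARST_TEMPLATE.format(term=term)

def rank_hypernyms(term: str, top_k: int = 2) -> list:
    """Rank candidate hypernyms by (toy) masked-LM probability."""
    scores = TOY_MLM_SCORES.get(term, {})
    return [w for w, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]]

print(build_prompt("bond"))    # bond is a kind of [MASK].
print(rank_hypernyms("bond"))  # ['security', 'instrument']
```

In the financial-domain setting the paper targets, the candidate fillers would come from the model's vocabulary rather than a fixed list, and template wording strongly affects the rankings.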

Scikit-talk: A toolkit for processing real-world conversational speech data

no code implementations SIGDIAL (ACL) 2021 Andreas Liesenfeld, Gabor Parti, Chu-Ren Huang

We present Scikit-talk, an open-source toolkit for processing collections of real-world conversational speech in Python.

ROCLING-2021 Shared Task: Dimensional Sentiment Analysis for Educational Texts

no code implementations ROCLING 2021 Liang-Chih Yu, Jin Wang, Bo Peng, Chu-Ren Huang

This paper presents the ROCLING 2021 shared task on dimensional sentiment analysis for educational texts, which seeks to identify real-valued sentiment scores of self-evaluation comments written by Chinese students in both the valence and arousal dimensions.

Sentiment Analysis

Cross-strait Variations on Two Near-synonymous Loanwords xie2shang1 and tan2pan4: A Corpus-based Comparative Study

no code implementations 9 Oct 2022 Yueyue Huang, Chu-Ren Huang

This study investigates cross-strait variations in two typical synonymous loanwords in Chinese, i.e. xie2shang1 and tan2pan4, drawing on MARVS theory.

Automatic Analysis of Linguistic Features in Journal Articles of Different Academic Impacts with Feature Engineering Techniques

no code implementations15 Nov 2021 Siyu Lei, Ruiying Yang, Chu-Ren Huang

This study extracts micro-level linguistic features from high- and moderate-impact journal research articles (RAs), using feature engineering methods.

Feature Engineering feature selection

PolyU CBS-Comp at SemEval-2021 Task 1: Lexical Complexity Prediction (LCP)

no code implementations SEMEVAL 2021 Rong Xiang, Jinghang Gu, Emmanuele Chersoni, Wenjie Li, Qin Lu, Chu-Ren Huang

In this contribution, we describe the system presented by the PolyU CBS-Comp Team at Task 1 of SemEval 2021, where the goal was to estimate the complexity of words in a given sentence context.

Lexical Complexity Prediction Sentence +1

Predicting gender and age categories in English conversations using lexical, non-lexical, and turn-taking features

no code implementations PACLIC 2020 Andreas Liesenfeld, Gábor Parti, Yu-Yin Hsu, Chu-Ren Huang

We explore differences in language use and turn-taking dynamics and identify a range of characteristics that set the categories apart.

Automatic Learning of Modality Exclusivity Norms with Crosslingual Word Embeddings

no code implementations Joint Conference on Lexical and Computational Semantics 2020 Emmanuele Chersoni, Rong Xiang, Qin Lu, Chu-Ren Huang

Our experiments focused on crosslingual word embeddings, in order to predict modality association scores by training on a high-resource language and testing on a low-resource one.

Word Embeddings
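As a rough illustration of the train-on-high-resource, test-on-low-resource setup described above, the following sketch fits a ridge regressor on synthetic vectors standing in for crosslingual word embeddings, then applies it to vectors from the "low-resource" side of the shared space. All data, the dimensionality, and the regularization value are invented for illustration; the paper's actual features and regressor may differ.

```python
import numpy as np

# Toy sketch: learn a linear map from (synthetic) crosslingual embeddings to
# a modality association score in one language, then predict zero-shot for
# vectors from another language living in the same shared space.

rng = np.random.default_rng(0)
d = 10                                  # embedding dimensionality (assumed)
true_w = rng.normal(size=d)             # hidden mapping from vector to score

X_high = rng.normal(size=(200, d))      # high-resource training vectors
y_high = X_high @ true_w                # toy modality association scores

lam = 0.1                               # ridge regularization strength
w_hat = np.linalg.solve(X_high.T @ X_high + lam * np.eye(d),
                        X_high.T @ y_high)

X_low = rng.normal(size=(50, d))        # "low-resource" vectors, same space
pred = X_low @ w_hat                    # zero-shot crosslingual predictions
```

Because both languages share one embedding space, the regressor trained on one side transfers directly to the other; that shared geometry is what makes the crosslingual transfer work.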

Sina Mandarin Alphabetical Words: A Web-driven Code-mixing Lexical Resource

no code implementations Asian Chapter of the Association for Computational Linguistics 2020 Rong Xiang, Mingyu Wan, Qi Su, Chu-Ren Huang, Qin Lu

Mandarin Alphabetical Word (MAW) is one indispensable component of Modern Chinese that demonstrates unique code-mixing idiosyncrasies influenced by language exchanges.

Using Conceptual Norms for Metaphor Detection

no code implementations WS 2020 Mingyu Wan, Kathleen Ahrens, Emmanuele Chersoni, Menghan Jiang, Qi Su, Rong Xiang, Chu-Ren Huang

This paper reports a linguistically-enriched method of detecting token-level metaphors for the second shared task on Metaphor Detection.

Are Word Embeddings Really a Bad Fit for the Estimation of Thematic Fit?

no code implementations LREC 2020 Emmanuele Chersoni, Ludovica Pannitto, Enrico Santus, Alessandro Lenci, Chu-Ren Huang

While neural embeddings represent a popular choice for word representation in a wide variety of NLP tasks, their usage for thematic fit modeling has been limited, as they have been reported to lag behind syntax-based count models.

Word Embeddings

Affection Driven Neural Networks for Sentiment Analysis

no code implementations LREC 2020 Rong Xiang, Yunfei Long, Mingyu Wan, Jinghang Gu, Qin Lu, Chu-Ren Huang

Deep neural network models have played a critical role in sentiment analysis with promising results in the recent decade.

Sentiment Analysis

Distributional Semantics Meets Construction Grammar. Towards a Unified Usage-Based Model of Grammar and Meaning

no code implementations WS 2019 Giulia Rambelli, Emmanuele Chersoni, Philippe Blache, Chu-Ren Huang, Alessandro Lenci

In this paper, we propose a new type of semantic representation of Construction Grammar that combines constructions with the vector representations used in Distributional Semantics.

A Structured Distributional Model of Sentence Meaning and Processing

no code implementations17 Jun 2019 Emmanuele Chersoni, Enrico Santus, Ludovica Pannitto, Alessandro Lenci, Philippe Blache, Chu-Ren Huang

In this paper, we propose a Structured Distributional Model (SDM) that combines word embeddings with formal semantics and is based on the assumption that sentences represent events and situations.

Sentence Word Embeddings

A Report on the Third VarDial Evaluation Campaign

no code implementations WS 2019 Marcos Zampieri, Shervin Malmasi, Yves Scherrer, Tanja Samardžić, Francis Tyers, Miikka Silfverberg, Natalia Klyueva, Tung-Le Pan, Chu-Ren Huang, Radu Tudor Ionescu, Andrei M. Butnaru, Tommi Jauhiainen

In this paper, we present the findings of the Third VarDial Evaluation Campaign organized as part of the sixth edition of the workshop on Natural Language Processing (NLP) for Similar Languages, Varieties and Dialects (VarDial), co-located with NAACL 2019.

Dialect Identification Morphological Analysis

A realistic and robust model for Chinese word segmentation

no code implementations21 May 2019 Chu-Ren Huang, Ting-Shuo Yo, Petr Simon, Shu-Kai Hsieh

Both experiments support the claim that the WBD model is a realistic model for Chinese word segmentation, as it can be easily adapted to new variants with robust results.

Chinese Word Segmentation Segmentation

Dual Memory Network Model for Biased Product Review Classification

no code implementations WS 2018 Yunfei Long, Mingyu Ma, Qin Lu, Rong Xiang, Chu-Ren Huang

In this work, we propose a dual user and product memory network (DUPMN) model to learn user profiles and product reviews using separate memory networks.

Classification General Classification +1

Fake News Detection Through Multi-Perspective Speaker Profiles

no code implementations IJCNLP 2017 Yunfei Long, Qin Lu, Rong Xiang, Minglei Li, Chu-Ren Huang

This paper proposes a novel method to incorporate speaker profiles into an attention based LSTM model for fake news detection.

Fake News Detection

A Cognition Based Attention Model for Sentiment Analysis

no code implementations EMNLP 2017 Yunfei Long, Qin Lu, Rong Xiang, Minglei Li, Chu-Ren Huang

Evaluations show the CBA based method outperforms the state-of-the-art local context based attention methods significantly.

Feature Engineering Product Recommendation +1

Leveraging Eventive Information for Better Metaphor Detection and Classification

no code implementations CONLL 2017 I-Hsuan Chen, Yunfei Long, Qin Lu, Chu-Ren Huang

We propose a set of syntactic conditions crucial to event structures to improve the model based on the classification of radical groups.

Classification Clustering +1

Selective Annotation of Sentence Parts: Identification of Relevant Sub-sentential Units

no code implementations WS 2016 Ge Xu, Xiaoyan Yang, Chu-Ren Huang

Many NLP tasks involve sentence-level annotation, yet the relevant information is often encoded not at the sentence level but in specific parts of the sentence.

Binary Classification Opinion Mining +1

Testing APSyn against Vector Cosine on Similarity Estimation

no code implementations PACLIC 2016 Enrico Santus, Emmanuele Chersoni, Alessandro Lenci, Chu-Ren Huang, Philippe Blache

In Distributional Semantic Models (DSMs), Vector Cosine is widely used to estimate similarity between word vectors, although this measure was noticed to suffer from several shortcomings.

Word Embeddings

Representing Verbs with Rich Contexts: an Evaluation on Verb Similarity

no code implementations EMNLP 2016 Emmanuele Chersoni, Enrico Santus, Alessandro Lenci, Philippe Blache, Chu-Ren Huang

Several studies on sentence processing suggest that the mental lexicon keeps track of the mutual expectations between words.

Sentence

A lexicon of perception for the identification of synaesthetic metaphors in corpora

no code implementations LREC 2016 Francesca Strik Lievers, Chu-Ren Huang

Synaesthesia is a type of metaphor associating linguistic expressions that refer to two different sensory modalities.

Database of Mandarin Neighborhood Statistics

1 code implementation LREC 2016 Karl Neergaard, Hongzhi Xu, Chu-Ren Huang

In the design of controlled experiments with language stimuli, researchers from psycholinguistic, neurolinguistic, and related fields, require language resources that isolate variables known to affect language processing.

POS

Unsupervised Measure of Word Similarity: How to Outperform Co-occurrence and Vector Cosine in VSMs

no code implementations30 Mar 2016 Enrico Santus, Tin-Shing Chiu, Qin Lu, Alessandro Lenci, Chu-Ren Huang

In this paper, we claim that vector cosine, generally considered among the most efficient unsupervised measures for identifying word similarity in Vector Space Models, can be outperformed by an unsupervised measure that calculates the extent of the intersection among the most mutually dependent contexts of the target words.

Word Similarity

What a Nerd! Beating Students and Vector Cosine in the ESL and TOEFL Datasets

no code implementations LREC 2016 Enrico Santus, Tin-Shing Chiu, Qin Lu, Alessandro Lenci, Chu-Ren Huang

In this paper, we claim that Vector Cosine, generally considered one of the most efficient unsupervised measures for identifying word similarity in Vector Space Models, can be outperformed by a completely unsupervised measure that evaluates the extent of the intersection among the most associated contexts of two target words, weighting that intersection according to the rank of the shared contexts in the dependency-ranked lists.

Word Similarity
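The rank-weighted intersection idea described in the two abstracts above can be sketched, under assumptions, as an overlap score over the two words' top-ranked context lists (in the spirit of the APSyn measure from the companion PACLIC 2016 paper), contrasted with plain vector cosine. The context lists below are toy data, and the exact weighting in the published systems may differ.

```python
import math

# Sketch: vector cosine vs. a rank-based context-intersection measure.
# Context lists and weighting are illustrative, not the authors' exact setup.

def cosine(u, v):
    """Standard vector cosine over two aligned dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def apsyn(contexts_a, contexts_b, n=3):
    """Score the overlap of the top-n ranked context lists, weighting each
    shared context by the inverse of its average rank (1-indexed)."""
    rank_a = {c: i + 1 for i, c in enumerate(contexts_a[:n])}
    rank_b = {c: i + 1 for i, c in enumerate(contexts_b[:n])}
    shared = rank_a.keys() & rank_b.keys()
    return sum(1.0 / ((rank_a[c] + rank_b[c]) / 2.0) for c in shared)

# Contexts shared at high ranks count more than ones shared far down the list.
print(apsyn(["drink", "cup", "hot"], ["drink", "mug", "hot"]))  # 1 + 1/3
```

Unlike cosine, which mixes all dimensions of the vectors, the intersection measure looks only at the few most salient contexts of each word, which is what the papers argue makes it competitive on similarity benchmarks.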

ROOT13: Spotting Hypernyms, Co-Hyponyms and Randoms

no code implementations29 Mar 2016 Enrico Santus, Tin-Shing Chiu, Qin Lu, Alessandro Lenci, Chu-Ren Huang

In this paper, we describe ROOT13, a supervised system for the classification of hypernyms, co-hyponyms and random words.

Classification General Classification

Nine Features in a Random Forest to Learn Taxonomical Semantic Relations

1 code implementation LREC 2016 Enrico Santus, Alessandro Lenci, Tin-Shing Chiu, Qin Lu, Chu-Ren Huang

When the classification is binary, ROOT9 achieves the following results against the baseline: hypernyms-co-hyponyms 95.7% vs. 69.8%, hypernyms-random 91.8% vs. 64.1%, and co-hyponyms-random 97.8% vs. 79.4%.

General Classification

Annotating Events in an Emotion Corpus

no code implementations LREC 2014 Sophia Lee, Shoushan Li, Chu-Ren Huang

This paper presents the development of a Chinese event-based emotion corpus.

Event Structure of Transitive Verb: A MARVS perspective

no code implementations13 Feb 2014 Jia-Fei Hong, Kathleen Ahrens, Chu-Ren Huang

Module-Attribute Representation of Verbal Semantics (MARVS) is a theory of the representation of verbal semantics that is based on Mandarin Chinese data (Huang et al. 2000).

Attribute
