no code implementations • 25 Mar 2021 • Sansiri Tarnpradab, Fereshteh Jafariakinabad, Kien A. Hua
In this scheme, the Bi-LSTM derives representations that capture information from the whole sentence and the whole thread, whereas the CNN extracts the most informative contextual features from each sentence and thread.
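The CNN side of such a hybrid typically captures its "most informative features" via max-over-time pooling: each filter slides over the token embeddings and only its strongest response is kept. A minimal sketch of that pooling step (toy dimensions and random filters are assumptions, not the paper's configuration):

```python
import numpy as np

def conv1d_max_pool(embeddings, filters):
    """Slide each filter over the token embeddings and keep, per filter,
    only the strongest response (max-over-time pooling) -- how a text CNN
    retains the most informative local feature."""
    seq_len, dim = embeddings.shape
    n_filters, width, fdim = filters.shape
    assert fdim == dim, "filter depth must match embedding dimension"
    pooled = np.empty(n_filters)
    for f in range(n_filters):
        responses = [
            np.sum(embeddings[i:i + width] * filters[f])
            for i in range(seq_len - width + 1)
        ]
        pooled[f] = max(responses)
    return pooled

# Toy sentence: 5 tokens with 4-dim embeddings; 3 filters of width 2.
rng = np.random.default_rng(0)
sent = rng.normal(size=(5, 4))
filt = rng.normal(size=(3, 2, 4))
features = conv1d_max_pool(sent, filt)
print(features.shape)  # (3,) -- one pooled feature per filter
```

In a full model, this pooled vector would be concatenated with the Bi-LSTM's sentence/thread representation before classification.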
1 code implementation • 14 Oct 2020 • Fereshteh Jafariakinabad, Kien A. Hua
Due to the n-to-1 mapping of words to their structural labels, each word is embedded into a vector representation that mainly carries structural information.
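The n-to-1 mapping means many words share one structural label, so words with the same label receive the same embedding and lexical identity is deliberately discarded. A minimal sketch (the word-to-tag table and 3-dim embeddings are illustrative assumptions):

```python
# Hypothetical sketch: several words map to one structural label (here,
# a coarse POS tag), so every word sharing a label gets the same vector.
word_to_label = {          # n-to-1: many words -> one label
    "cat": "NOUN", "dog": "NOUN",
    "runs": "VERB", "sleeps": "VERB",
    "the": "DET",
}
label_embedding = {        # toy 3-dim structural embeddings
    "NOUN": [1.0, 0.0, 0.0],
    "VERB": [0.0, 1.0, 0.0],
    "DET":  [0.0, 0.0, 1.0],
}

def embed_structurally(tokens):
    """Embed each word via its structural label, not its identity."""
    return [label_embedding[word_to_label[t]] for t in tokens]

vecs = embed_structurally(["the", "cat", "runs"])
# "cat" and "dog" collide by design: structure is kept, lexis discarded.
assert embed_structurally(["cat"]) == embed_structurally(["dog"])
```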
no code implementations • 19 Sep 2019 • Sansiri Tarnpradab, Kien A. Hua
The prevalence of social media has made information sharing possible across the globe.
no code implementations • 12 Sep 2019 • Fereshteh Jafariakinabad, Kien A. Hua
Writing style is a combination of consistent decisions associated with a specific author at different levels of language production, including lexical, syntactic, and structural.
no code implementations • 9 May 2019 • Naifan Zhuang, Guo-Jun Qi, The Duc Kieu, Kien A. Hua
The Long Short-Term Memory (LSTM) recurrent neural network is capable of processing complex sequential information since it utilizes special gating schemes for learning representations from long input sequences.
no code implementations • 26 Feb 2019 • Fereshteh Jafariakinabad, Sansiri Tarnpradab, Kien A. Hua
In this paper, we introduce a syntactic recurrent neural network to encode the syntactic patterns of a document in a hierarchical structure.
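A hierarchical encoder of this kind works in two stages: word-level syntactic information (e.g. POS tags) is aggregated into sentence vectors, and sentence vectors are aggregated into a document vector. The sketch below substitutes simple averaging for the recurrent layers, so it shows only the hierarchy, not the paper's actual RNN; the toy tag set is an assumption:

```python
import numpy as np

POS = ["NOUN", "VERB", "DET", "ADJ"]  # assumed toy tag set

def encode_sentence(tags):
    """Sentence vector: mean of one-hot POS vectors
    (a stand-in for the word-level recurrent encoder)."""
    onehots = np.eye(len(POS))[[POS.index(t) for t in tags]]
    return onehots.mean(axis=0)

def encode_document(sentences):
    """Document vector: mean of sentence vectors
    (a stand-in for the sentence-level recurrent encoder)."""
    return np.mean([encode_sentence(s) for s in sentences], axis=0)

doc = encode_document([["DET", "NOUN", "VERB"], ["NOUN", "VERB"]])
print(doc.shape)  # (4,) -- one document-level syntactic representation
```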
no code implementations • 30 May 2018 • Kevin Joslyn, Naifan Zhuang, Kien A. Hua
Music generation research has grown in popularity over the past decade, thanks to the deep learning revolution that has redefined the landscape of artificial intelligence.
no code implementations • 25 May 2018 • Sansiri Tarnpradab, Fei Liu, Kien A. Hua
Forum threads are lengthy and rich in content.
no code implementations • 11 Apr 2018 • Naifan Zhuang, The Duc Kieu, Guo-Jun Qi, Kien A. Hua
The proposed model progressively builds up the ability of the LSTM gates to detect salient dynamical patterns in deeper stacked layers, modeling higher orders of the DoS; the model is thus termed the deep differential Recurrent Neural Network (d2RNN).
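The Derivative of States (DoS) here is the difference of consecutive hidden states, and higher orders are obtained by differencing repeatedly; static stretches of a sequence then contribute nothing, which is what lets the gates focus on salient dynamics. A minimal sketch of computing the k-th order DoS (the toy state sequence is an assumption):

```python
import numpy as np

def derivative_of_states(hidden_states, order=1):
    """k-th order Derivative of States (DoS): repeated differences of
    consecutive hidden states, highlighting salient dynamics."""
    dos = np.asarray(hidden_states, dtype=float)
    for _ in range(order):
        dos = dos[1:] - dos[:-1]
    return dos

# Five 3-dim hidden states evolving as [t, 2t, t^2].
h = np.array([[t, 2 * t, t ** 2] for t in range(5)], dtype=float)
d1 = derivative_of_states(h, order=1)  # first-order differences
d2 = derivative_of_states(h, order=2)  # second-order differences
print(d1.shape, d2.shape)  # (4, 3) (3, 3)
```

In d2RNN these differences feed the gates of successively deeper layers, rather than being computed as a post hoc feature.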
no code implementations • 6 Jun 2015 • Jun Ye, Hao Hu, Kai Li, Guo-Jun Qi, Kien A. Hua
With the prevalence of commodity depth cameras, the new paradigm of user interfaces based on 3D motion capture and recognition has dramatically changed the way humans interact with computers.
no code implementations • 19 Mar 2015 • Kai Li, Guo-Jun Qi, Jun Ye, Kien A. Hua
In this work, we propose a novel hash learning framework that encodes features' rank orders, instead of their numeric values, in a number of optimal low-dimensional ranking subspaces.
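The key property of rank-order codes is invariance to monotone transformations of the feature values: only orderings matter. The sketch below uses random coordinate subsets as a stand-in for the learned optimal subspaces, with the argmax position within each subset as the code (dimensions and subspace construction are illustrative assumptions):

```python
import numpy as np

def rank_hash(x, subspaces):
    """Hash a feature vector by the rank order of its values in
    low-dimensional subspaces: each code element is the argmax position,
    so codes depend only on orderings, not on numeric values."""
    return tuple(int(np.argmax(x[idx])) for idx in subspaces)

rng = np.random.default_rng(42)
dim, n_subspaces, k = 16, 4, 3
# Each "subspace" is k randomly chosen coordinates (a stand-in for the
# learned optimal ranking subspaces).
subspaces = [rng.choice(dim, size=k, replace=False)
             for _ in range(n_subspaces)]

x = rng.normal(size=dim)
code = rank_hash(x, subspaces)
# Rank-order codes are invariant to monotone rescaling of the features.
assert rank_hash(3.0 * x + 0.5, subspaces) == code
```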