no code implementations • SEMEVAL 2020 • Adithya Avvaru, Sanath Vobilisetty
Our system, built using the state-of-the-art Transformer-based pre-trained Bidirectional Encoder Representations from Transformers (BERT) model, outperformed the baseline models on tasks A and C and performed close to the baseline model on task B.
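The abstract above describes fine-tuning a pre-trained BERT model for classification. A minimal sketch of the kind of classification head that typically sits on top of BERT's pooled `[CLS]` representation (the dimensions, label count, and function names below are illustrative assumptions, not details from the paper):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def classify_cls(cls_embedding, W, b):
    """Linear classification head over a pooled [CLS] vector.

    cls_embedding: (hidden,) pooled representation from the encoder
    W: (hidden, num_labels) weights, b: (num_labels,) bias
    Returns a probability distribution over the task labels.
    """
    return softmax(cls_embedding @ W + b)

# Toy example: hidden size 4, 3 labels standing in for a task's classes.
rng = np.random.default_rng(0)
probs = classify_cls(rng.normal(size=4), rng.normal(size=(4, 3)), np.zeros(3))
```

In practice the encoder and this head are trained jointly during fine-tuning; only the head is sketched here.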
no code implementations • WS 2020 • Adithya Avvaru, Sanath Vobilisetty, Radhika Mamidi
Sarcasm detection, regarded as one of the sub-problems of sentiment analysis, is a particularly tricky task because the introduction of sarcastic words can flip the sentiment of the sentence itself.
no code implementations • SEMEVAL 2019 • Adithya Avvaru, Anupam Pandey
The strengths of XGBoost, a scalable gradient tree boosting algorithm, and of Skip-Thought Vectors, a distributed sentence encoder, have not yet been explored by the cQA research community.
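The pairing described above (distributed sentence encodings fed to a boosted-tree model) can be sketched at the feature level. The `embed` function below is a hypothetical stand-in for a real Skip-Thought encoder, and the resulting feature vector is what would, in practice, be handed to XGBoost:

```python
import zlib
import numpy as np

def embed(sentence, dim=8):
    """Hypothetical stand-in for a Skip-Thought sentence encoder:
    deterministic pseudo-random unit vectors keyed on the text."""
    rng = np.random.default_rng(zlib.crc32(sentence.encode("utf-8")))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def qa_features(question, answer):
    """Feature vector for one (question, answer) pair: both sentence
    embeddings plus their cosine similarity, ready for a tree booster."""
    q, a = embed(question), embed(answer)
    cosine = float(q @ a)  # embeddings are unit-norm, so dot = cosine
    return np.concatenate([q, a, [cosine]])

feats = qa_features("How do I reset my password?",
                    "Use the account settings page.")
```

The actual feature set and booster configuration used in the paper are not specified here; this only illustrates the encode-then-boost pipeline shape.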
no code implementations • PACLIC 2018 • Subba Reddy Oota, Adithya Avvaru, Mounika Marreddy, Radhika Mamidi
We compared the results of our Experts Model with both the baseline results and the top five performers of SemEval-2018 Task 1, Affect in Tweets (AIT).
no code implementations • 26 Nov 2018 • Subba Reddy Oota, Adithya Avvaru, Naresh Manwani, Raju S. Bapi
We argue that each expert learns a certain region of brain activations corresponding to its category of words, which solves the problem of identifying the regions with a simple encoding model.
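The mixture-of-experts encoding idea described above (one expert per word category, each mapping word features to the brain activations of its region) can be sketched as follows; the dimensions, the soft gating, and the linear experts are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def moe_predict(word_vec, category_probs, expert_weights):
    """Mixture-of-experts encoding model (illustrative sketch).

    word_vec:       (d,) semantic feature vector for one word
    category_probs: (k,) soft assignment of the word to k categories
    expert_weights: (k, d, v) one linear map per expert, each predicting
                    activation over v voxels
    Each expert encodes the activation pattern for its word category;
    the gate mixes the experts' voxel predictions.
    """
    # per_expert[k, v] = sum_d expert_weights[k, d, v] * word_vec[d]
    per_expert = np.einsum("kdv,d->kv", expert_weights, word_vec)
    return category_probs @ per_expert  # (v,) mixed voxel prediction

rng = np.random.default_rng(1)
pred = moe_predict(rng.normal(size=5),
                   np.array([0.7, 0.2, 0.1]),
                   rng.normal(size=(3, 5, 10)))
```

With a one-hot gate this reduces to a single expert's linear encoding model, which is what makes the per-category regions directly readable from the experts.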