no code implementations • AACL (knlp) 2020 • Katikapalli Subramanyam Kalyan, Sivanesan Sangeetha
It is the first work to leverage both concept descriptions and synonyms to represent concepts as retrofitted target concept vectors in a text-similarity-based framework for social media medical concept normalization (MCN).
no code implementations • EMNLP (Louhi) 2020 • Katikapalli Subramanyam Kalyan, Sivanesan Sangeetha
Second, we compute the cosine similarity between the embedding of the input concept mention and the embeddings of all target concepts.
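This similarity-ranking step can be sketched as follows. The vectors below are toy stand-ins for real sentence embeddings, and `rank_target_concepts` is a hypothetical helper, not a function from the paper:

```python
import numpy as np

def rank_target_concepts(mention_vec, concept_vecs):
    """Rank target concepts by cosine similarity to the mention embedding."""
    m = mention_vec / np.linalg.norm(mention_vec)
    c = concept_vecs / np.linalg.norm(concept_vecs, axis=1, keepdims=True)
    sims = c @ m                      # cosine similarity to every target concept
    return np.argsort(-sims), sims    # indices in descending similarity order

# Toy 2-d vectors standing in for real mention/concept embeddings.
mention = np.array([1.0, 0.0])
concepts = np.array([[0.9, 0.1],
                     [0.0, 1.0],
                     [0.7, 0.7]])
order, sims = rank_target_concepts(mention, concepts)
print(order[0])  # index of the best-matching target concept → 0
```

The highest-scoring target concept is then taken as the normalized form of the mention.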
no code implementations • EMNLP (DeeLIO) 2020 • Katikapalli Subramanyam Kalyan, Sivanesan Sangeetha
In our model, we integrate a) target concept information, in the form of target concept vectors generated by encoding target concept descriptions with SRoBERTa, a state-of-the-art RoBERTa-based sentence embedding model, and b) domain lexicon knowledge, by enriching the target concept vectors with synonym relationship knowledge using the retrofitting algorithm.
no code implementations • 4 Oct 2023 • Katikapalli Subramanyam Kalyan
Large language models (LLMs) are a special class of pretrained language models obtained by scaling up model size, pretraining corpus, and computation.
1 code implementation • 12 Aug 2021 • Katikapalli Subramanyam Kalyan, Ajit Rajasekharan, Sivanesan Sangeetha
These models are built on top of transformers, self-supervised learning, and transfer learning.
no code implementations • 16 Apr 2021 • Katikapalli Subramanyam Kalyan, Ajit Rajasekharan, Sivanesan Sangeetha
We strongly believe there is a need for a paper that provides a comprehensive survey of the various transformer-based biomedical pretrained language models (BPLMs).
no code implementations • 25 Jan 2021 • Katikapalli Subramanyam Kalyan, Sivanesan Sangeetha
The concept vectors generated using the Sentence BERT model based on SapBERT and retrofitted using UMLS-related concepts achieved the best results on all four datasets.
no code implementations • SMM4H (COLING) 2020 • Katikapalli Subramanyam Kalyan, S. Sangeetha
Extracting ADR mentions is treated as sequence labeling, and normalizing ADR mentions as multi-class classification.
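Framing ADR extraction as sequence labeling typically means assigning each token a BIO tag. A minimal sketch of producing such labels from token-level spans (the sentence and the `bio_labels` helper are illustrative assumptions, not artifacts of the paper):

```python
def bio_labels(tokens, adr_spans):
    """Assign BIO tags for ADR mention extraction.
    adr_spans are (start, end) token-index pairs, end exclusive."""
    labels = ["O"] * len(tokens)
    for start, end in adr_spans:
        labels[start] = "B-ADR"            # first token of the mention
        for i in range(start + 1, end):
            labels[i] = "I-ADR"            # continuation tokens
    return labels

tokens = ["this", "drug", "gave", "me", "severe", "headache"]
print(bio_labels(tokens, [(4, 6)]))
# → ['O', 'O', 'O', 'O', 'B-ADR', 'I-ADR']
```

A tagger is then trained to predict these labels per token, after which each extracted mention is passed to a multi-class classifier that maps it to a target concept.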
no code implementations • 7 Jun 2020 • Katikapalli Subramanyam Kalyan, S. Sangeetha
Second, it computes the cosine similarity between the embedding of the input concept mention and the embeddings of all target concepts.