Search Results for author: H. Howie Huang

Found 8 papers, 5 papers with code

Don’t Judge a Language Model by Its Last Layer: Contrastive Learning with Layer-Wise Attention Pooling

1 code implementation · COLING 2022 · Dongsuk Oh, Yejin Kim, Hodong Lee, H. Howie Huang, Heuiseok Lim

Recent pre-trained language models (PLMs) have achieved great success on many natural language processing tasks by learning linguistic features and contextualized sentence representations.

Contrastive Learning · Language Modelling · +3
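The layer-wise attention pooling named in the title can be sketched as follows. This is a hedged illustration, not the paper's implementation: the per-layer [CLS] vectors, the learned query vector `q`, and all dimensions are placeholders standing in for a real PLM's hidden states.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def layerwise_attention_pool(layer_cls, q):
    """Pool per-layer [CLS] vectors of shape (L, d) into a single
    sentence embedding (d,) using attention scores against a
    learned query vector q of shape (d,)."""
    scores = layer_cls @ q       # (L,) one relevance score per layer
    weights = softmax(scores)    # attention distribution over layers
    return weights @ layer_cls   # (d,) attention-weighted sum

# Toy example: 13 layer outputs (embedding layer + 12 blocks) of width 8.
rng = np.random.default_rng(0)
layer_cls = rng.normal(size=(13, 8))
q = rng.normal(size=8)
emb = layerwise_attention_pool(layer_cls, q)
```

In a contrastive setup, `emb` would replace the last-layer [CLS] vector as the sentence representation fed to the contrastive loss, letting the model weight intermediate layers rather than judging it "by its last layer".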

Improving Content Recommendation: Knowledge Graph-Based Semantic Contrastive Learning for Diversity and Cold-Start Users

no code implementations · 27 Mar 2024 · Yejin Kim, Scott Rome, Kevin Foley, Mayur Nankani, Rimon Melamed, Javier Morales, Abhay Yadav, Maria Peifer, Sardar Hamidian, H. Howie Huang

It is essential to provide recommendations that are both personalized and diverse, rather than relying solely on high rank-based performance metrics such as click-through rate and recall.

Contrastive Learning · Descriptive · +4

Prompts have evil twins

1 code implementation · 13 Nov 2023 · Rimon Melamed, Lucas H. McCabe, Tanay Wakhare, Yejin Kim, H. Howie Huang, Enric Boix-Adsera

We discover that many natural-language prompts can be replaced by corresponding prompts that are unintelligible to humans but that provably elicit similar behavior in language models.

Illuminati: Towards Explaining Graph Neural Networks for Cybersecurity Analysis

1 code implementation · 26 Mar 2023 · Haoyu He, Yuede Ji, H. Howie Huang

Given a graph and a pre-trained GNN model, Illuminati identifies the important nodes, edges, and attributes that contribute to the prediction, while requiring no prior knowledge of the GNN model.

Fraud Detection · Vulnerability Detection

A Graph Attention Based Approach for Trajectory Prediction in Multi-agent Sports Games

no code implementations · 18 Dec 2020 · Ding Ding, H. Howie Huang

In this paper, we propose a spatial-temporal trajectory prediction approach that is able to learn the strategy of a team with multiple coordinated agents.

Graph Attention · Trajectory Prediction
