no code implementations • ACL 2022 • Gary Ang, Ee-Peng Lim
Most works on financial forecasting use information directly associated with individual companies (e.g., stock prices, news on the company) to predict stock returns for trading.
no code implementations • 1 Apr 2024 • Xiongwei Wu, Sicheng Yu, Ee-Peng Lim, Chong-Wah Ngo
The pre-training phase equips FoodLearner with the capability to align visual information with corresponding textual representations that are specifically related to food, while the second phase adapts both the FoodLearner and the Image-Informed Text Encoder for the segmentation task.
1 code implementation • 15 Mar 2024 • Lei Wang, Ee-Peng Lim
Large language models (LLMs) have shown excellent performance on various NLP tasks.
1 code implementation • 28 Feb 2024 • Lei Wang, Wanyu Xu, Zhiqiang Hu, Yihuai Lan, Shan Dong, Hao Wang, Roy Ka-Wei Lee, Ee-Peng Lim
This paper introduces a new in-context learning (ICL) mechanism called In-Image Learning (I$^2$L) that combines demonstration examples, visual cues, and chain-of-thought reasoning into an aggregated image to enhance the capabilities of Large Multimodal Models (e.g., GPT-4V) in multimodal reasoning tasks.
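A minimal sketch of the aggregated-image idea using Pillow: paste the demonstration images and the query image onto one canvas and render the reasoning cue as text inside the same image. The layout, file names, and helper function are illustrative assumptions, not the paper's actual pipeline.

```python
# Illustrative composition of demos, a query image, and a chain-of-thought
# cue into one aggregated image (layout and names are assumptions).
from PIL import Image, ImageDraw

def build_aggregated_image(demo_paths, query_path, cot_text,
                           tile=(336, 336), pad=16):
    tiles = [Image.open(p).convert("RGB").resize(tile)
             for p in demo_paths + [query_path]]
    w = tile[0] * len(tiles) + pad * (len(tiles) + 1)
    h = tile[1] + 120 + pad * 3  # extra strip for the reasoning text
    canvas = Image.new("RGB", (w, h), "white")
    for i, t in enumerate(tiles):
        canvas.paste(t, (pad + i * (tile[0] + pad), pad))
    # Render the textual cue directly into the image so a single multimodal
    # input carries both the visual demonstrations and the prompt.
    ImageDraw.Draw(canvas).text((pad, tile[1] + pad * 2), cot_text, fill="black")
    return canvas

img = build_aggregated_image(["demo1.png", "demo2.png"], "query.png",
                             "Let's think step by step.")
img.save("aggregated_input.png")  # fed to a multimodal model such as GPT-4V
```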
1 code implementation • 19 Feb 2024 • Hezhe Qiao, Qingsong Wen, XiaoLi Li, Ee-Peng Lim, Guansong Pang
This work considers a practical semi-supervised graph anomaly detection (GAD) scenario, where some of the nodes in a graph are known to be normal, in contrast to the unsupervised setting assumed in most GAD studies, which work with a fully unlabeled graph.
1 code implementation • 4 Dec 2023 • Lei Wang, Jiabang He, Shenshen Li, Ning Liu, Ee-Peng Lim
Fine-grained object attributes and behaviors that are non-existent in the image may still be generated, yet remain unmeasured by current evaluation methods.
no code implementations • 1 Dec 2023 • Pei-Chi Lo, Yi-Hang Tsai, Ee-Peng Lim, San-Yih Hwang
Two research questions are formulated to investigate the accuracy of LLMs in recalling information from pre-training knowledge graphs and their ability to infer knowledge graph relations from context.
1 code implementation • 23 Oct 2023 • Yihuai Lan, Zhiqiang Hu, Lei Wang, Yang Wang, Deheng Ye, Peilin Zhao, Ee-Peng Lim, Hui Xiong, Hao Wang
To achieve this goal, we adopt Avalon, a representative communication game, as the environment and use system prompts to guide LLM agents to play the game.
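A toy illustration of steering an LLM agent with a system prompt in such a communication game; the prompt wording and the `chat` helper are hypothetical, not the paper's actual setup.

```python
# Guiding an LLM agent with a role-specific system prompt (the prompt text
# and the chat() helper are hypothetical, not the paper's setup).
AVALON_SYSTEM_PROMPT = (
    "You are playing Avalon as the role of {role}. "
    "Known information: {private_info}. "
    "Speak persuasively, never reveal your hidden role, and vote to "
    "advance your team's objective."
)

def agent_turn(chat, role, private_info, dialogue_history):
    messages = [
        {"role": "system",
         "content": AVALON_SYSTEM_PROMPT.format(role=role,
                                                private_info=private_info)},
        {"role": "user", "content": "\n".join(dialogue_history)},
    ]
    return chat(messages)  # chat() wraps any chat-completion API
```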
1 code implementation • 11 Oct 2023 • Lei Wang, Songheng Zhang, Yun Wang, Ee-Peng Lim, Yong Wang
To obtain demonstration examples with high-quality explanations, we propose a new explanation-generation bootstrapping method that iteratively refines generated explanations by considering the previous generation and a template-based hint.
3 code implementations • 6 May 2023 • Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, Ee-Peng Lim
To address the calculation errors and improve the quality of generated reasoning steps, we extend PS prompting with more detailed instructions and derive PS+ prompting.
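In practice, Plan-and-Solve prompting amounts to replacing the usual zero-shot trigger with a more detailed one. The trigger below paraphrases the PS+ idea; the exact wording may differ from the paper's released prompt.

```python
# Zero-shot Plan-and-Solve style trigger, appended after the problem text.
# The wording paraphrases the PS+ prompt; consult the paper's repo for the
# exact string.
PS_PLUS_TRIGGER = (
    "Let's first understand the problem, extract relevant variables and "
    "their corresponding numerals, and devise a plan. Then, let's carry out "
    "the plan, calculate intermediate variables (paying attention to correct "
    "numerical calculation and commonsense), solve the problem step by step, "
    "and show the answer."
)

def build_prompt(problem: str) -> str:
    return f"Q: {problem}\nA: {PS_PLUS_TRIGGER}"
```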
1 code implementation • 6 Apr 2023 • Lei Wang, Ee-Peng Lim
Large language models (LLMs) have achieved impressive zero-shot performance in various natural language processing (NLP) tasks, demonstrating their capabilities for inference without training examples.
2 code implementations • 4 Apr 2023 • Zhiqiang Hu, Lei Wang, Yihuai Lan, Wanyu Xu, Ee-Peng Lim, Lidong Bing, Xing Xu, Soujanya Poria, Roy Ka-Wei Lee
The success of large language models (LLMs), like GPT-4 and ChatGPT, has led to the development of numerous cost-effective and accessible alternatives that are created by finetuning open-access LLMs with task-specific data (e.g., ChatDoctor) or instruction data (e.g., Alpaca).
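A minimal parameter-efficient fine-tuning sketch with Hugging Face `peft` (LoRA), in the spirit of such adapter-based finetuning; the base model name and hyperparameters are placeholder assumptions, not the paper's exact configuration.

```python
# Minimal LoRA setup with Hugging Face peft; model name and hyperparameters
# are placeholders, not the paper's exact configuration.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the low-rank adapters are trained
```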
1 code implementation • 16 Oct 2022 • Ning Han, Xun Yang, Ee-Peng Lim, Hao Chen, Qianru Sun
In turn, the frame-level optimization is performed through gradient descent, using the meta loss of the video retrieval model computed on the whole video.
1 code implementation • 3 Sep 2022 • Lei Wang, Ee-Peng Lim, Zhiwei Liu, Tianxiang Zhao
Recently, contrastive learning has been applied to the sequential recommendation task to address data sparsity caused by users with few item interactions and items with few user adoptions.
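A sketch of the common augment-and-contrast recipe in this line of work: produce two stochastic views of each item sequence (crop, mask, or reorder) and pull their encodings together with an InfoNCE loss. This is a generic pattern, not this paper's specific model.

```python
# Generic augment-and-contrast recipe for item sequences; a common pattern
# in contrastive sequential recommendation, not this paper's exact model.
import random
import torch
import torch.nn.functional as F

MASK_ID = 0  # assumed reserved mask/padding token

def augment(seq, ratio=0.3):
    op = random.choice(["crop", "mask", "reorder"])
    n = max(1, int(len(seq) * ratio))
    if op == "crop":
        start = random.randrange(len(seq) - n + 1)
        return seq[start:start + n]
    if op == "mask":
        idx = set(random.sample(range(len(seq)), n))
        return [MASK_ID if i in idx else x for i, x in enumerate(seq)]
    start = random.randrange(len(seq) - n + 1)   # reorder a random span
    sub = seq[start:start + n]
    random.shuffle(sub)
    return seq[:start] + sub + seq[start + n:]

def info_nce(z1, z2, temperature=0.1):
    # z1, z2: [batch, dim] encodings of two augmented views per sequence
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0))  # positives lie on the diagonal
    return F.cross_entropy(logits, labels)
```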
1 code implementation • 3 Sep 2022 • Yunshi Lan, Lei Wang, Jing Jiang, Ee-Peng Lim
To improve compositional generalization in MWP solving, we propose an iterative data augmentation method that injects diverse compositional variations into the training data and can be combined with existing MWP methods.
no code implementations • 29 Sep 2021 • Xiongwei Wu, Ee-Peng Lim, Steven Hoi, Qianru Sun
To implement this module, we define two variants of attention: self-attention on the summed-up feature map, and cross-attention between the two feature maps before they are summed up.
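An illustrative PyTorch rendering of the two variants; the embedding size, head count, and the final fusion choice are assumptions rather than the paper's settings.

```python
# Two attention variants: self-attention on the summed feature map, and
# cross-attention between the two maps before summation (shapes assumed).
import torch
import torch.nn as nn

attn = nn.MultiheadAttention(embed_dim=256, num_heads=8, batch_first=True)

def self_attention_fusion(a, b):
    # a, b: [batch, tokens, 256] flattened feature maps
    s = a + b               # sum first ...
    out, _ = attn(s, s, s)  # ... then attend within the summed map
    return out

def cross_attention_fusion(a, b):
    out_ab, _ = attn(a, b, b)  # a attends to b
    out_ba, _ = attn(b, a, a)  # b attends to a
    return out_ab + out_ba     # sum after cross-attention
```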
1 code implementation • Findings (EMNLP) 2021 • Qiyuan Zhang, Lei Wang, Sicheng Yu, Shuohang Wang, Yang Wang, Jing Jiang, Ee-Peng Lim
While diverse question answering (QA) datasets have been proposed and contributed significantly to the development of deep learning models for QA tasks, the existing datasets fall short in two aspects.
1 code implementation • 2 Sep 2021 • Yihuai Lan, Lei Wang, Qiyuan Zhang, Yunshi Lan, Bing Tian Dai, Yan Wang, Dongxiang Zhang, Ee-Peng Lim
Over the last few years, a growing number of datasets and deep learning-based methods have been proposed for effectively solving MWPs.
Ranked #8 on Math Word Problem Solving on Math23K
2 code implementations • 12 May 2021 • Xiongwei Wu, Xin Fu, Ying Liu, Ee-Peng Lim, Steven C. H. Hoi, Qianru Sun
Existing food image segmentation models are underperforming due to two reasons: (1) there is a lack of high-quality food image datasets with fine-grained ingredient labels and pixel-wise location masks -- the existing datasets either carry coarse ingredient labels or are small in size; and (2) the complex appearance of food makes it difficult to localize and recognize ingredients in food images, e.g., the ingredients may overlap one another in the same image, and the identical ingredient may appear distinctly in different food images.
Ranked #3 on Semantic Segmentation on FoodSeg103 (using extra training data)
1 code implementation • EACL 2021 • Lee-Hsun Hsieh, Yang-Yin Lee, Ee-Peng Lim
Pretrained on large amounts of data, autoregressive language models are able to generate high-quality sequences.
no code implementations • 14 Mar 2021 • Zhiqiang Hu, Roy Ka-Wei Lee, Lei Wang, Ee-Peng Lim, Bo Dai
Authorship attribution (AA), which is the task of finding the owner of a given text, is an important and widely studied research topic with many applications.
1 code implementation • 5 Nov 2020 • V N S Rama Krishna Pinnimty, Matt Zhao, Palakorn Achananuparp, Ee-Peng Lim
We present an invert-and-edit framework to automatically transform facial weight of an input face image to look thinner or heavier by leveraging semantic facial attributes encoded in the latent space of Generative Adversarial Networks (GANs).
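A sketch of the invert-and-edit idea: project the face into the GAN's latent space, then move the code along a semantic "facial weight" direction before regenerating. The generator, inverter, and direction vector below are placeholders, not the paper's components.

```python
# Invert-and-edit sketch: G (generator), invert(), and the direction vector
# are placeholders, not the paper's actual components.
import torch

def edit_facial_weight(G, invert, image, weight_direction, alpha=1.5):
    z = invert(G, image)                      # project image into latent space
    z_edited = z + alpha * weight_direction   # alpha > 0: heavier, < 0: thinner
    return G(z_edited)                        # regenerate the edited face
```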
no code implementations • 16 Jul 2020 • Amila Silva, Pei-Chi Lo, Ee-Peng Lim
Moreover, we use the stacked model to predict the personal values of a large community of Twitter users from their public tweet content, and we empirically derive several interesting findings about their online behavior that are consistent with earlier findings in the social science and social media literature.
no code implementations • 14 Jul 2020 • Zhe Liu, Xianzhi Wang, Lina Yao, Jake An, Lei Bai, Ee-Peng Lim
We design a semi-supervised model based on a hierarchical embedding network to extract high-level features of consumers and to predict the top-$N$ purchase destinations of a consumer.
1 code implementation • ACL 2020 • Jipeng Zhang, Lei Wang, Roy Ka-Wei Lee, Yi Bin, Yan Wang, Jie Shao, Ee-Peng Lim
While the recent tree-based neural models have demonstrated promising results in generating solution expression for the math word problem (MWP), most of these models do not capture the relationships and order information among the quantities well.
Ranked #10 on Math Word Problem Solving on Math23K
no code implementations • 9 Mar 2020 • Hao Wang, Doyen Sahoo, Chenghao Liu, Ke Shu, Palakorn Achananuparp, Ee-Peng Lim, Steven C. H. Hoi
Food retrieval is an important task for analyzing food-related information, where we are interested in retrieving relevant information about a queried food item, such as its ingredients and cooking instructions.
Ranked #7 on Cross-Modal Retrieval on Recipe1M
1 code implementation • 5 Mar 2020 • Helena H. Lee, Ke Shu, Palakorn Achananuparp, Philips Kokoh Prasetyo, Yue Liu, Ee-Peng Lim, Lav R. Varshney
Interest in the automatic generation of cooking recipes has been growing steadily over the past few years, thanks to the large number of cooking recipes available online.
no code implementations • 6 Feb 2020 • Amila Silva, Pei-Chi Lo, Ee-Peng Lim
To cope with assigning RIASEC labels to a massive number of jobs, we propose JPLink, a machine learning approach that uses the text content of job titles and job descriptions.
no code implementations • 26 Sep 2019 • Doyen Sahoo, Wang Hao, Shu Ke, Wu Xiongwei, Hung Le, Palakorn Achananuparp, Ee-Peng Lim, Steven C. H. Hoi
FoodAI has made food logging convenient, aiding smart consumption and a healthy lifestyle.
1 code implementation • 17 Sep 2019 • Yue Liu, Helena Lee, Palakorn Achananuparp, Ee-Peng Lim, Tzu-Ling Cheng, Shou-De Lin
Human beings are creatures of habit.
1 code implementation • 17 Sep 2019 • Helena Lee, Palakorn Achananuparp, Yue Liu, Ee-Peng Lim, Lav R. Varshney
Consumption of diets with low glycemic impact is highly recommended for diabetics and pre-diabetics as it helps maintain their blood glucose levels.
no code implementations • 27 Aug 2019 • Huozhi Zhou, Lingda Wang, Lav R. Varshney, Ee-Peng Lim
Compared to the original combinatorial semi-bandit problem, our setting assumes the reward distributions of base arms may change in a piecewise-stationary manner at unknown time steps.
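A sketch of the generic detect-and-restart pattern used for such piecewise-stationary settings: track a recent window of rewards per base arm, flag a mean shift, and reset that arm's statistics. This is a simplified windowed detector under assumed thresholds, not the paper's exact change-detection algorithm.

```python
# Detect-and-restart sketch for a piecewise-stationary base arm: compare a
# recent reward window against the long-run mean and reset on a shift
# (window size and threshold are assumptions, not the paper's algorithm).
from collections import deque

class ArmMonitor:
    def __init__(self, window=100, threshold=0.15):
        self.rewards = deque(maxlen=window)
        self.count, self.mean = 0, 0.0
        self.threshold = threshold

    def update(self, r):
        self.rewards.append(r)
        self.count += 1
        self.mean += (r - self.mean) / self.count  # running long-run mean
        recent = sum(self.rewards) / len(self.rewards)
        # Only test once the long-run mean covers more than the window.
        if self.count > len(self.rewards) and abs(recent - self.mean) > self.threshold:
            self.count, self.mean = 0, 0.0  # change detected: restart estimates
            self.rewards.clear()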
2 code implementations • CVPR 2019 • Hao Wang, Doyen Sahoo, Chenghao Liu, Ee-Peng Lim, Steven C. H. Hoi
Food computing is playing an increasingly important role in human daily life, and has found tremendous applications in guiding human behavior towards smart food consumption and a healthy lifestyle.
Ranked #8 on Cross-Modal Retrieval on Recipe1M
no code implementations • 4 Sep 2018 • Richard J. Oentaryo, Xavier Jayaraj Siddarth Ashok, Ee-Peng Lim, Philips Kokoh Prasetyo
Its key premise is that the observed career trajectories in OPNs may not necessarily be optimal, and can be improved by learning to maximize the sum of payoffs attainable by following a career path.
no code implementations • 24 Jun 2016 • Richard J. Oentaryo, Ee-Peng Lim, Freddy Chong Tat Chua, Jia-Wei Low, David Lo
The abundance of user-generated data in social media has incentivized the development of methods to infer the latent attributes of users, which are crucial for personalization, advertising, and recommendation.