no code implementations • 12 Mar 2024 • Mingyue Cheng, Hao Zhang, Qi Liu, Fajie Yuan, Zhi Li, Zhenya Huang, Enhong Chen, Jun Zhou, Longfei Li
It is also important to model the semantic relatedness reflected in content features, e.g., images and text.
no code implementations • 6 Feb 2024 • Zhixuan Chu, Yan Wang, Feng Zhu, Lu Yu, Longfei Li, Jinjie Gu
The advent of large language models (LLMs) such as ChatGPT, PaLM, and GPT-4 has catalyzed remarkable advances in natural language processing, demonstrating human-like language fluency and reasoning capacities.
no code implementations • 16 Jan 2024 • Zhixuan Chu, Yan Wang, Qing Cui, Longfei Li, Wenqing Chen, Zhan Qin, Kui Ren
As personalized recommendation systems become vital in the age of information overload, traditional methods relying solely on historical user interactions often fail to fully capture the multifaceted nature of human interests.
no code implementations • 20 Dec 2023 • Zhixuan Chu, Mengxuan Hu, Qing Cui, Longfei Li, Sheng Li
To address this, we propose a Task-Driven Causal Feature Distillation model (TDCFD) to transform original feature values into causal feature attributions for the specific risk prediction task.
no code implementations • 20 Dec 2023 • Li Wang, Xiaohua Zhang, Longfei Li, Hongyun Meng, Xianghai Cao
Spectral unmixing is a significant challenge in hyperspectral image processing.
no code implementations • 4 Dec 2023 • Yanchu Guan, Dong Wang, Zhixuan Chu, Shiyu Wang, Feiyue Ni, Ruihua Song, Longfei Li, Jinjie Gu, Chenyi Zhuang
This paper proposes a novel LLM-based virtual assistant that can automatically perform multi-step operations within mobile apps based on high-level user requests.
no code implementations • 7 Oct 2023 • Zhixuan Chu, Huaiyu Guo, Xinyuan Zhou, Yijia Wang, Fei Yu, Hong Chen, Wanqing Xu, Xin Lu, Qing Cui, Longfei Li, Jun Zhou, Sheng Li
Large language models (LLMs) show promise for natural language tasks but struggle when applied directly to complex domains like finance.
no code implementations • 6 Sep 2023 • Yan Wang, Zhixuan Chu, Tao Zhou, Caigao Jiang, Hongyan Hao, Minjie Zhu, Xindong Cai, Qing Cui, Longfei Li, James Y. Zhang, Siqiao Xue, Jun Zhou
Asynchronous time series, also known as temporal event sequences, are the basis of many applications throughout different industries.
no code implementations • 23 Aug 2023 • Yueqi Wang, Yoni Halpern, Shuo Chang, Jingchen Feng, Elaine Ya Le, Longfei Li, Xujian Liang, Min-Cheng Huang, Shane Li, Alex Beutel, Yaping Zhang, Shuchao Bi
In this work, we incorporate explicit and implicit negative user feedback into the training objective of sequential recommenders in the retrieval stage using a "not-to-recommend" loss function that optimizes for the log-likelihood of not recommending items with negative feedback.
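The "not-to-recommend" objective described above can be sketched as a simple loss function: positives keep the usual log-likelihood term, while items with negative feedback contribute the log-likelihood of *not* recommending them. This is a minimal illustrative sketch, not the paper's implementation; the function name and the simple summed form are assumptions.

```python
import numpy as np

def not_to_recommend_loss(pos_probs, neg_probs, eps=1e-8):
    """Illustrative retrieval loss (hypothetical helper, not the paper's code):
    standard negative log-likelihood for positively interacted items, plus a
    "not-to-recommend" term maximizing the log-likelihood of NOT recommending
    items that received negative feedback."""
    pos_probs = np.asarray(pos_probs, dtype=float)
    neg_probs = np.asarray(neg_probs, dtype=float)
    pos_term = -np.log(pos_probs + eps).sum()        # do recommend positives
    neg_term = -np.log(1.0 - neg_probs + eps).sum()  # do NOT recommend negatives
    return pos_term + neg_term
```

Pushing the model's recommendation probability for negatively rated items toward zero drives the second term toward zero, which is the retrieval-stage behavior the abstract describes.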
no code implementations • 21 Aug 2023 • Zhixuan Chu, Hongyan Hao, Xin Ouyang, Simeng Wang, Yan Wang, Yue Shen, Jinjie Gu, Qing Cui, Longfei Li, Siqiao Xue, James Y. Zhang, Sheng Li
In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs.
no code implementations • 21 Aug 2023 • Yan Wang, Zhixuan Chu, Xin Ouyang, Simeng Wang, Hongyan Hao, Yue Shen, Jinjie Gu, Siqiao Xue, James Y. Zhang, Qing Cui, Longfei Li, Jun Zhou, Sheng Li
In this paper, we propose a novel approach that leverages large language models (LLMs) to construct personalized reasoning graphs.
no code implementations • 19 May 2023 • Ya-Lin Zhang, Jun Zhou, Yankun Ren, Yue Zhang, Xinxing Yang, Meng Li, Qitao Shi, Longfei Li
In this paper, we consider the problem of long-tail scenario modeling under budget limitations, i.e., insufficient human resources for the model training stage and limited time and computing resources for the model inference stage.
no code implementations • 13 Feb 2023 • Feng Zhu, Mingjie Zhong, Xinxing Yang, Longfei Li, Lu Yu, Tiehua Zhang, Jun Zhou, Chaochao Chen, Fei Wu, Guanfeng Liu, Yan Wang
In recommendation scenarios, there are two long-standing challenges, i.e., selection bias and data sparsity, which lead to a significant drop in prediction accuracy for both Click-Through Rate (CTR) and post-click Conversion Rate (CVR) tasks.
1 code implementation • NeurIPS 2023 • Yongduo Sui, Qitian Wu, Jiancan Wu, Qing Cui, Longfei Li, Jun Zhou, Xiang Wang, Xiangnan He
From the perspective of invariant learning and stable learning, a recently well-established paradigm for out-of-distribution generalization, stable features of the graph are assumed to causally determine labels, while environmental features tend to be unstable and can lead to the two primary types of distribution shifts.
no code implementations • 1 Nov 2022 • Xinyu Li, Yilin Li, Qing Cui, Longfei Li, Jun Zhou
In the era of big data, the explosive growth of multi-source heterogeneous data offers many exciting challenges and opportunities for improving the inference of conditional average treatment effects.
no code implementations • 8 Jan 2022 • Yeqi Wang, Longfei Li, Cheng Li, Yan Xi, Hairong Zheng, Yusong Lin, Shanshan Wang
Geometric manifolds of hand-crafted and learned features are constructed to mine the implicit relationship between deep learning and radiomics, and thereby to derive a mutually consistent and essential representation for glioma grading.
no code implementations • 23 Dec 2021 • Longfei Li, Rui Yang, Xin Chen, Cheng Li, Hairong Zheng, Yusong Lin, Zaiyi Liu, Shanshan Wang
Prostate Imaging Reporting and Data System (PI-RADS) based on multi-parametric MRI classifies patients into 5 categories (PI-RADS 1-5) for routine clinical diagnosis guidance.
no code implementations • 18 Aug 2021 • Feng Zhu, Yan Wang, Jun Zhou, Chaochao Chen, Longfei Li, Guanfeng Liu
Moreover, to avoid negative transfer, we further propose a Personalized training strategy to minimize the embedding difference of common entities between a richer dataset and a sparser dataset, deriving three new models, i.e., GA-DTCDR-P, GA-MTCDR-P, and GA-CDR+CSR-P, for the three scenarios respectively.
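The core idea of minimizing the embedding difference of common entities can be sketched as an alignment penalty added to the training objective: for each entity present in both datasets, penalize the squared distance between its two embeddings. This is a hedged sketch; the function name, dictionary-based embedding tables, and the plain squared-distance form are assumptions, not the paper's actual formulation.

```python
import numpy as np

def embedding_alignment_penalty(emb_rich, emb_sparse, common_ids, weight=1.0):
    """Hypothetical penalty pulling the embeddings of entities shared by a
    richer and a sparser dataset toward each other, to mitigate negative
    transfer. `emb_rich` / `emb_sparse` map entity id -> embedding vector."""
    total = 0.0
    for i in common_ids:
        d = emb_rich[i] - emb_sparse[i]
        total += float(np.dot(d, d))  # squared L2 distance per common entity
    return weight * total
```

In training, such a term would be added to the recommendation losses of both domains, so that shared users/items converge to similar representations while domain-specific entities remain free.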
no code implementations • 2 Mar 2021 • Feng Zhu, Yan Wang, Chaochao Chen, Jun Zhou, Longfei Li, Guanfeng Liu
To address the long-standing data sparsity problem in recommender systems (RSs), cross-domain recommendation (CDR) has been proposed to leverage the relatively richer information from a richer domain to improve the recommendation performance in a sparser domain.
no code implementations • 13 Dec 2020 • Kai Zhang, Hao Qian, Qing Cui, Qi Liu, Longfei Li, Jun Zhou, Jianhui Ma, Enhong Chen
In Click-Through Rate (CTR) prediction scenarios, users' sequential behaviors are widely exploited in the recent literature to capture user interest.
no code implementations • 2 Sep 2020 • Lu Yu, Shichao Pei, Lizhong Ding, Jun Zhou, Longfei Li, Chuxu Zhang, Xiangliang Zhang
This paper studies learning node representations with graph neural networks (GNNs) in an unsupervised scenario.
no code implementations • 16 Mar 2020 • Ya-Lin Zhang, Longfei Li
Multi-task learning (MTL) aims at improving the generalization performance of several related tasks by leveraging useful information contained in them.
no code implementations • 5 Mar 2020 • Qitao Shi, Ya-Lin Zhang, Longfei Li, Xinxing Yang, Meng Li, Jun Zhou
Machine learning techniques have been widely applied in Internet companies for various tasks, acting as an essential driving force, and feature engineering has been generally recognized as a crucial step when constructing machine learning systems.
no code implementations • 26 Dec 2019 • Longfei Li, Ziqi Liu, Chaochao Chen, Ya-Lin Zhang, Jun Zhou, Xiaolong Li
With online payment platforms now ubiquitous and important, fraud transaction detection has become key for such platforms to ensure user account safety and platform security.
no code implementations • ICLR 2020 • Ruofan Liang, Tianlin Li, Longfei Li, Jing Wang, Quanshi Zhang
As a generic tool, our method can be broadly used for different applications.
no code implementations • 11 May 2018 • Ya-Lin Zhang, Jun Zhou, Wenhao Zheng, Ji Feng, Longfei Li, Ziqi Liu, Ming Li, Zhiqiang Zhang, Chaochao Chen, Xiaolong Li, Zhi-Hua Zhou, Yuan Qi
This model blocks fraudulent transactions amounting to a large sum of money each day.
no code implementations • 17 Apr 2018 • Longfei Li, Peilin Zhao, Jun Zhou, Xiaolong Li
However, to choose the rank properly, one usually needs to run the algorithm many times with different ranks, which is clearly inefficient for large-scale datasets.
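The inefficiency described above, refitting at every candidate rank, can be illustrated with a toy baseline. As a stand-in for the paper's factorization model, this sketch uses truncated SVD (so each "fit" is cheap and exact); the function names and the error-threshold stopping rule are assumptions for illustration only.

```python
import numpy as np

def truncated_svd_error(X, rank):
    """Frobenius-norm reconstruction error of the best rank-`rank`
    approximation of X, computed via SVD (stand-in for a model refit)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = s.copy()
    s[rank:] = 0.0  # keep only the leading `rank` singular values
    return float(np.linalg.norm(X - (U * s) @ Vt))

def naive_rank_search(X, candidate_ranks, tol):
    """The inefficient baseline the abstract criticizes: refit the model at
    every candidate rank and return the smallest rank whose reconstruction
    error falls below `tol`."""
    for r in sorted(candidate_ranks):
        if truncated_svd_error(X, r) <= tol:
            return r
    return max(candidate_ranks)
```

Each candidate rank here costs a full decomposition, which is exactly the repeated-training overhead that motivates choosing the rank adaptively instead.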
no code implementations • 13 Apr 2018 • Chaochao Chen, Ziqi Liu, Peilin Zhao, Longfei Li, Jun Zhou, Xiaolong Li
The experimental results demonstrate that, compared with classic and state-of-the-art (distributed) latent factor models, DCH achieves comparable recommendation accuracy while offering both fast convergence in the offline model training procedure and real-time efficiency in the online recommendation procedure.
3 code implementations • 3 Feb 2018 • Ziqi Liu, Chaochao Chen, Longfei Li, Jun Zhou, Xiaolong Li, Le Song, Yuan Qi
We present GeniePath, a scalable approach for learning adaptive receptive fields of neural networks defined on permutation-invariant graph data.