no code implementations • 28 Mar 2024 • Binzong Geng, ZhaoXin Huan, Xiaolu Zhang, Yong He, Liang Zhang, Fajie Yuan, Jun Zhou, Linjian Mo
However, we argue that a critical obstacle remains in deploying LLMs for practical use: the efficiency of LLMs when processing long textual user behaviors.
no code implementations • 21 Mar 2024 • Yong He, Hongshan Yu, Muhammad Ibrahim, Xiaoyan Liu, Tongjia Chen, Anwaar Ulhaq, Ajmal Mian
This strategy allows various transformer blocks to share the same position information over the same resolution points, thereby reducing network parameters and training time without compromising accuracy. Experimental comparisons with existing methods on multiple datasets demonstrate the efficacy of SMTransformer and skip-attention-based up-sampling for point cloud processing tasks, including semantic segmentation and classification.
no code implementations • 12 Mar 2024 • Zeyu Li, Kangxiang Qin, Yong He, Wang Zhou, Xinsheng Zhang
In the first step, we integrate the shared subspace information across multiple studies using a proposed method called the Grassmannian barycenter, instead of directly performing PCA on the pooled dataset.
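A common extrinsic construction of a barycenter of subspaces on the Grassmannian (a sketch, not necessarily the paper's exact estimator) averages the subspaces' projection matrices and takes the top eigenvectors of the average:

```python
import numpy as np

def grassmann_barycenter(subspaces, k):
    """Average k-dim subspaces by averaging their projection matrices
    and taking the top-k eigenvectors (extrinsic/chordal barycenter)."""
    p = subspaces[0].shape[0]
    P_bar = np.zeros((p, p))
    for V in subspaces:          # each V: p x k, orthonormal columns
        P_bar += V @ V.T
    P_bar /= len(subspaces)
    # eigenvectors of the averaged projector span the barycenter
    _, eigvecs = np.linalg.eigh(P_bar)
    return eigvecs[:, -k:]       # top-k eigenvectors

# toy example: three noisy copies of the same 2-D subspace in R^5
rng = np.random.default_rng(0)
base = np.linalg.qr(rng.standard_normal((5, 2)))[0]
subs = [np.linalg.qr(base + 0.01 * rng.standard_normal((5, 2)))[0]
        for _ in range(3)]
V_bar = grassmann_barycenter(subs, k=2)
```

The returned basis spans a subspace close to `base`, illustrating how study-level subspaces can be pooled without concatenating the raw data.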
no code implementations • 26 Oct 2023 • Christian Herglotz, Matthias Kränzler, Xixue Chu, Edouard Francois, Yong He, André Kaup
In this paper, we discuss one aspect of the latest MPEG standard edition on energy-efficient media consumption, also known as Green Metadata (ISO/IEC 23001-11), which is the interactive signaling for remote decoder-power reduction for peer-to-peer video conferencing.
no code implementations • 31 Aug 2023 • ZhaoXin Huan, Ke Ding, Ang Li, Xiaolu Zhang, Xu Min, Yong He, Liang Zhang, Jun Zhou, Linjian Mo, Jinjie Gu, Zhongyi Liu, Wenliang Zhong, Guannan Zhang
3) AntM$^{2}$C provides 1 billion CTR data with 200 features, including 200 million users and 6 million items.
no code implementations • 8 Mar 2023 • Yong He, Hongshan Yu, Zhengeng Yang, Xiaoyan Liu, Wei Sun, Ajmal Mian
In particular, we achieve state-of-the-art semantic segmentation results of 76% mIoU on S3DIS 6-fold and 72.2% on S3DIS Area 5.
no code implementations • 8 Mar 2023 • Yong He, Hongshan Yu, Zhengeng Yang, Wei Sun, Mingtao Feng, Ajmal Mian
Local features and contextual dependencies are crucial for 3D point cloud analysis.
no code implementations • 31 Jan 2023 • Xin Dong, Ruize Wu, Chao Xiong, Hai Li, Lei Cheng, Yong He, Shiyou Qian, Jian Cao, Linjian Mo
GDOD explicitly decomposes gradients into task-shared and task-conflict components and adopts a general update rule to avoid interference across all task gradients.
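The idea of splitting a task gradient into a component that agrees with another task and a component that conflicts with it can be sketched with a simple projection, in the spirit of gradient-surgery methods (this is illustrative, not GDOD's exact decomposition or update rule):

```python
import numpy as np

def decompose_gradients(g1, g2):
    """Split g1 into a component compatible with g2 and a component
    that conflicts with it (illustrative projection-based split)."""
    if np.dot(g1, g2) >= 0:
        return g1, np.zeros_like(g1)                 # no conflict
    conflict = (np.dot(g1, g2) / np.dot(g2, g2)) * g2  # projection onto g2
    shared = g1 - conflict                           # orthogonal remainder
    return shared, conflict

g_task1 = np.array([1.0, 2.0])
g_task2 = np.array([-2.0, 0.5])   # conflicting direction
shared, conflict = decompose_gradients(g_task1, g_task2)
```

After the split, the `shared` component is orthogonal to the other task's gradient, so applying it cannot increase that task's loss to first order.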
no code implementations • 8 Oct 2022 • Yong He, Cheng Wang, Shun Zhang, Nan Li, Zhaorong Li, Zhenyu Zeng
Herein, we develop a new model called KG-MTT-BERT (Knowledge Graph Enhanced Multi-Type Text BERT) by extending BERT to long, multi-type text and integrating a medical knowledge graph.
no code implementations • 27 Jun 2022 • Ruiming Du, Zhihong Ma, Pengyao Xie, Yong He, Haiyan Cen
This study shows that deep-learning-based point cloud segmentation has great potential for resolving dense plant point clouds with complex morphological traits.
no code implementations • 23 Apr 2021 • Fan Zhang, Alessandro Daducci, Yong He, Simona Schiavi, Caio Seguin, Robert Smith, Chun-Hung Yeh, Tengda Zhao, Lauren J. O'Donnell
Diffusion magnetic resonance imaging (dMRI) tractography is an advanced imaging technique that enables in vivo mapping of the brain's white matter connections at macro scale.
no code implementations • CVPR 2021 • Zhikai Chen, Lingxi Xie, Shanmin Pang, Yong He, Bo Zhang
This paper presents MagDR, a mask-guided detection and reconstruction pipeline for defending deepfakes from adversarial attacks.
no code implementations • 9 Mar 2021 • Yong He, Hongshan Yu, Xiaoyan Liu, Zhengeng Yang, Wei Sun, Ajmal Mian
This paper fills the gap and provides a comprehensive survey of the recent progress made in deep learning based 3D segmentation.
no code implementations • 18 Dec 2020 • Zhengeng Yang, Hongshan Yu, Yong He, Zhi-Hong Mao, Ajmal Mian
By learning to solve a Jigsaw Puzzle problem with 25 patches and transferring the learned features to the semantic segmentation task on the Cityscapes dataset, we achieve a 5.8 percentage point improvement over a baseline model initialized from random values.
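The Jigsaw Puzzle pretext task can be sketched as follows: cut each image into a grid of patches (5×5 here, matching the 25 patches above), shuffle them, and train a network to predict the permutation. A minimal data-preparation sketch:

```python
import numpy as np

def make_jigsaw_sample(image, grid=5, rng=None):
    """Cut an image into grid x grid patches, shuffle them, and return
    the shuffled patches plus the permutation as the prediction target."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    ph, pw = h // grid, w // grid
    patches = [image[i*ph:(i+1)*ph, j*pw:(j+1)*pw]
               for i in range(grid) for j in range(grid)]
    perm = rng.permutation(grid * grid)
    shuffled = [patches[p] for p in perm]
    return shuffled, perm   # a network is trained to predict `perm`

img = np.arange(100 * 100 * 3, dtype=np.float32).reshape(100, 100, 3)
patches, perm = make_jigsaw_sample(img, grid=5,
                                   rng=np.random.default_rng(0))
```

Solving the puzzle requires recognizing object parts and their spatial relations, which is why the learned features transfer to segmentation.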
no code implementations • 8 Jul 2020 • Sanchu Han, Yong He, Yin Ding
OEMs and new entrants can take the Mobility as a Service (MaaS) market as their entry point: upgrade the E/E (Electrical and Electronic) architecture to a C/C (Computing and Communication) architecture, build an open, software-defined, data-driven software platform for their production and service models, and use efficient, collaborative interplay among vehicles, roads, cloud, and network to continuously improve core technologies such as autonomous driving, thereby providing MaaS operators with an affordable and agile platform.
1 code implementation • 20 Apr 2020 • Yong He, PengFei Liu, Xinsheng Zhang, Wang Zhou
We construct a Median-of-Means (MOM) estimator for the centered log-ratio covariance matrix and propose a thresholding procedure that is adaptive to the variability of individual entries.
Methodology
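The building block of the estimator above is the median-of-means (MOM): split the sample into blocks, average within each block, and take the median of the block means, which is robust to heavy tails and outliers. A minimal entrywise sketch for a covariance matrix with a fixed hard threshold (the paper's threshold is adaptive to entry variability; this is an illustration only):

```python
import numpy as np

def median_of_means(x, n_blocks=5):
    """Robust mean estimate: split samples into blocks, average each
    block, and take the median of the block means."""
    blocks = np.array_split(x, n_blocks)
    return np.median([b.mean() for b in blocks])

def mom_thresholded_cov(X, n_blocks=5, tau=0.05):
    """Entrywise MOM estimate of the covariance of data X (n x p),
    followed by hard thresholding of small entries (illustrative)."""
    Xc = X - X.mean(axis=0)
    p = X.shape[1]
    S = np.empty((p, p))
    for i in range(p):
        for j in range(p):
            S[i, j] = median_of_means(Xc[:, i] * Xc[:, j], n_blocks)
    S[np.abs(S) < tau] = 0.0   # hard threshold
    return S

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))
S = mom_thresholded_cov(X)
```

With independent standard-normal columns, the diagonal of `S` is near 1 while small spurious off-diagonal entries are zeroed out by the threshold.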
no code implementations • 23 Mar 2020 • Long Yu, Yong He, Xin-bing Kong, Xinsheng Zhang
In this study, we propose a projection estimation method for large-dimensional matrix factor models with cross-sectionally spiked eigenvalues.
Methodology
no code implementations • 10 Dec 2019 • Zhikai Chen, Lingxi Xie, Shanmin Pang, Yong He, Qi Tian
There have been many efforts in attacking image classification models with adversarial perturbations, but the same topic on video classification has not yet been thoroughly studied.
1 code implementation • 14 Aug 2019 • Yong He, Xinbing Kong, Long Yu, Xinsheng Zhang
Large-dimensional factor models have drawn much attention in the big-data era, as they reduce dimensionality and extract underlying features using a few latent common factors.
Methodology
no code implementations • 30 Mar 2016 • Bin Wang, Zhijian Ou, Yong He, Akinori Kawamura
The dominant language models (LMs), such as n-gram and neural network (NN) models, represent sentence probabilities in terms of conditionals.
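The conditional factorization referred to here is the chain rule, p(w_1..w_T) = ∏_t p(w_t | w_1..w_{t-1}); n-gram models truncate the history. A minimal bigram illustration on a toy corpus (the corpus and sentences are invented for the example):

```python
from collections import defaultdict
import math

# Estimate bigram conditionals p(w_t | w_{t-1}) from a toy corpus.
corpus = [["<s>", "the", "cat", "sat", "</s>"],
          ["<s>", "the", "dog", "sat", "</s>"]]

counts = defaultdict(lambda: defaultdict(int))
for sent in corpus:
    for prev, word in zip(sent, sent[1:]):
        counts[prev][word] += 1

def bigram_prob(sentence):
    """p(w_1..w_T) = prod_t p(w_t | w_{t-1}) under the bigram assumption."""
    logp = 0.0
    for prev, word in zip(sentence, sentence[1:]):
        total = sum(counts[prev].values())
        logp += math.log(counts[prev][word] / total)
    return math.exp(logp)

p = bigram_prob(["<s>", "the", "cat", "sat", "</s>"])  # 1 * 1/2 * 1 * 1 = 0.5
```

Neural LMs replace the count-based conditionals with a learned distribution over the next word, but keep the same product-of-conditionals form.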